From b6013a5e689ff4ff8a869aa262c9d04d454f5a71 Mon Sep 17 00:00:00 2001 From: Nick Craig-Wood Date: Sun, 10 Mar 2024 11:22:43 +0000 Subject: [PATCH] Version v1.66.0 --- MANUAL.html | 12157 ++++++------ MANUAL.md | 4239 ++++- MANUAL.txt | 11928 +++++++----- bin/use-deadlock-detector | 14 + docs/content/alias.md | 15 + docs/content/azureblob.md | 29 + docs/content/azurefiles.md | 11 + docs/content/b2.md | 18 +- docs/content/box.md | 11 + docs/content/cache.md | 11 + docs/content/changelog.md | 172 +- docs/content/chunker.md | 11 + docs/content/combine.md | 15 + docs/content/commands/rclone.md | 83 +- docs/content/commands/rclone_bisync.md | 48 +- docs/content/commands/rclone_copy.md | 12 +- docs/content/commands/rclone_copyto.md | 3 +- docs/content/commands/rclone_copyurl.md | 28 +- docs/content/commands/rclone_listremotes.md | 4 +- docs/content/commands/rclone_lsf.md | 34 +- docs/content/commands/rclone_mount.md | 45 +- docs/content/commands/rclone_move.md | 11 +- docs/content/commands/rclone_moveto.md | 3 +- docs/content/commands/rclone_nfsmount.md | 929 + docs/content/commands/rclone_serve_dlna.md | 25 +- docs/content/commands/rclone_serve_docker.md | 25 +- docs/content/commands/rclone_serve_ftp.md | 27 +- docs/content/commands/rclone_serve_http.md | 27 +- docs/content/commands/rclone_serve_nfs.md | 31 +- docs/content/commands/rclone_serve_s3.md | 25 +- docs/content/commands/rclone_serve_sftp.md | 27 +- docs/content/commands/rclone_serve_webdav.md | 25 +- docs/content/commands/rclone_sync.md | 34 +- docs/content/compress.md | 11 + docs/content/crypt.md | 27 + docs/content/drive.md | 13 + docs/content/dropbox.md | 11 + docs/content/fichier.md | 11 + docs/content/filefabric.md | 11 + docs/content/flags.md | 80 +- docs/content/ftp.md | 11 + docs/content/googlecloudstorage.md | 11 + docs/content/googlephotos.md | 11 + docs/content/hasher.md | 11 + docs/content/hdfs.md | 11 + docs/content/hidrive.md | 11 + docs/content/http.md | 11 + docs/content/imagekit.md | 11 + docs/content/internetarchive.md | 11 + docs/content/jottacloud.md | 11 + docs/content/koofr.md | 11 + docs/content/linkbox.md | 15 + docs/content/local.md | 13 + docs/content/mailru.md | 11 + docs/content/mega.md | 11 + docs/content/memory.md | 15 + docs/content/netstorage.md | 11 + docs/content/opendrive.md | 11 + docs/content/oracleobjectstorage.md | 14 + docs/content/pcloud.md | 11 + docs/content/pikpak.md | 11 + docs/content/premiumizeme.md | 11 + docs/content/protondrive.md | 11 + docs/content/putio.md | 11 + docs/content/qingstor.md | 11 + docs/content/rc.md | 80 +- docs/content/s3.md | 47 +- docs/content/seafile.md | 11 + docs/content/sftp.md | 11 + docs/content/sharefile.md | 11 + docs/content/sia.md | 11 + docs/content/smb.md | 11 + docs/content/storj.md | 15 + docs/content/sugarsync.md | 11 + docs/content/swift.md | 11 + docs/content/union.md | 11 + docs/content/uptobox.md | 11 + docs/content/webdav.md | 22 + docs/content/yandex.md | 11 + docs/content/zoho.md | 11 + fstest/testserver/init.d/TestFTPVsftpdTLS | 26 + ...go_on_light__vertical_color_800px_2to1.png | Bin 0 -> 23388 bytes rclone.1 | 15704 +++++++++++----- 83 files changed, 31031 insertions(+), 15513 deletions(-) create mode 100755 bin/use-deadlock-detector create mode 100644 docs/content/commands/rclone_nfsmount.md create mode 100755 fstest/testserver/init.d/TestFTPVsftpdTLS create mode 100644 graphics/logo/logo_on_light/logo_on_light__vertical_color_800px_2to1.png diff --git a/MANUAL.html b/MANUAL.html index d16296fa0..c53b0862a 100644 --- a/MANUAL.html +++ 
b/MANUAL.html @@ -81,7 +81,7 @@

rclone(1) User Manual

Nick Craig-Wood

-

Nov 26, 2023

+

Mar 10, 2024

Rclone syncs your files to cloud storage

rclone logo

@@ -127,6 +127,7 @@
  • Copy new or changed files to cloud storage
  • Sync (one way) to make a directory identical
  • +
  • Bisync (two way) to keep two directories in sync bidirectionally
  • Move files to cloud storage deleting the local after verification
  • Check hashes and for missing/extra files
  • Mount your cloud storage as a network disk
  • @@ -139,7 +140,6 @@
  • 1Fichier
  • Akamai Netstorage
  • Alibaba Cloud (Aliyun) Object Storage System (OSS)
  • -
  • Amazon Drive
  • Amazon S3
  • Backblaze B2
  • Box
  • @@ -162,6 +162,7 @@
  • Hetzner Storage Box
  • HiDrive
  • HTTP
  • +
  • ImageKit
  • Internet Archive
  • Jottacloud
  • IBM COS S3
  • @@ -495,7 +496,6 @@ go build
  • 1Fichier
  • Akamai Netstorage
  • Alias
  • -
  • Amazon Drive
  • Amazon S3
  • Backblaze B2
  • Box
  • @@ -610,6 +610,8 @@ destpath/sourcepath/two.txt

    See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

    For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:

    rclone copy --max-age 24h --no-traverse /path/to/src remote:
    +

    Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag.

    +

    Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.
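For example (the paths and remote name here are illustrative), a copy that also carries metadata across might look like:

    rclone copy --metadata /path/to/src remote:backup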

    Note: Use the -P/--progress flag to view real-time transfer statistics.

    Note: Use the --dry-run or the --interactive/-i flag to test without copying anything.

    rclone copy source:path dest:path [flags]
    @@ -627,7 +629,7 @@ destpath/sourcepath/two.txt --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -641,6 +643,7 @@ destpath/sourcepath/two.txt --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -697,12 +700,50 @@ destpath/sourcepath/two.txt

    It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command if unsure.

    If dest:path doesn't exist, it is created and the source:path contents go there.

    It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.

    +

    Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag.

    +

    Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.

    Note: Use the -P/--progress flag to view real-time transfer statistics

    Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post for more info.
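For example, a cautious way to start is to preview what dedupe would do (the remote path is illustrative):

    rclone dedupe --dry-run remote:path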

    +

    Logger Flags

    +

    The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different.

    +

    The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.

    + +

    The --dest-after flag writes a list file using the same format flags as lsf (including customizable options for hash, modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but not identical -- it should output an accurate list of what will be on the destination after the sync.

    +

    Note that these logger flags have a few limitations, and certain scenarios are not currently supported:

    + +

Note also that each file is logged during the sync, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file (which may or may not match what actually DID).
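As a minimal sketch (the file name is illustrative), several of these flags can be combined on a single run, sending the combined report to stdout and the differing paths to a file:

    rclone sync --combined - --differ differ.txt source:path dest:path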

    rclone sync source:path dest:path [flags]

    Options

    -
          --create-empty-src-dirs   Create empty source dirs on destination after sync
    -  -h, --help                    help for sync
    +
          --absolute                Put a leading / in front of path names
    +      --combined string         Make a combined report of changes to this file
    +      --create-empty-src-dirs   Create empty source dirs on destination after sync
    +      --csv                     Output in CSV format
    +      --dest-after string       Report all files that exist on the dest post-sync
    +      --differ string           Report all non-matching files to this file
    +  -d, --dir-slash               Append a slash to directory names (default true)
    +      --dirs-only               Only list directories
    +      --error string            Report all files with errors (hashing or reading) to this file
    +      --files-only              Only list files (default true)
    +  -F, --format string           Output format - see lsf help for details (default "p")
    +      --hash h                  Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
    +  -h, --help                    help for sync
    +      --match string            Report all matching files to this file
    +      --missing-on-dst string   Report all files missing from the destination to this file
    +      --missing-on-src string   Report all files missing from the source to this file
    +  -s, --separator string        Separator for the items in the format (default ";")
    +  -t, --timeformat string       Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)

    Copy Options

    Flags for anything which can Copy a file.

          --check-first                                 Do all the checks before starting transfers
    @@ -714,7 +755,7 @@ destpath/sourcepath/two.txt
    --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -728,6 +769,7 @@ destpath/sourcepath/two.txt --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -742,6 +784,7 @@ destpath/sourcepath/two.txt --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -796,6 +839,8 @@ destpath/sourcepath/two.txt

    Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server-side move will be used, otherwise it will copy it (server-side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

    If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

    See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
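For example (paths are illustrative), to move the contents of a local directory and clean up the now-empty directories left behind:

    rclone move --delete-empty-src-dirs /path/to/src remote:dest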

    +

    Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag.

    +

    Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.

    Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

    Note: Use the -P/--progress flag to view real-time transfer statistics.

    rclone move source:path dest:path [flags]
    @@ -814,7 +859,7 @@ destpath/sourcepath/two.txt --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -828,6 +873,7 @@ destpath/sourcepath/two.txt --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -1601,23 +1647,35 @@ rclone backend help <backendname>

    Synopsis

    Perform bidirectional synchronization between two paths.

Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
- List files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
- Propagate changes on Path1 to Path2, and vice-versa.

    +

    Bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.

    See full bisync description for details.

    rclone bisync remote1:path1 remote2:path2 [flags]

    Options

    -
          --check-access              Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
    -      --check-filename string     Filename for --check-access (default: RCLONE_TEST)
    -      --check-sync string         Controls comparison of final listings: true|false|only (default: true) (default "true")
    -      --create-empty-src-dirs     Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
    -      --filters-file string       Read filtering patterns from a file
    -      --force                     Bypass --max-delete safety check and run the sync. Consider using with --verbose
    -  -h, --help                      help for bisync
    -      --ignore-listing-checksum   Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
    -      --localtime                 Use local time in listings (default: UTC)
    -      --no-cleanup                Retain working files (useful for troubleshooting and testing).
    -      --remove-empty-dirs         Remove ALL empty directories at the final cleanup step.
    -      --resilient                 Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
    -  -1, --resync                    Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
    -      --workdir string            Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
    +
          --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
    +      --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
    +      --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
    +      --check-filename string                Filename for --check-access (default: RCLONE_TEST)
    +      --check-sync string                    Controls comparison of final listings: true|false|only (default: true) (default "true")
    +      --compare string                       Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
    +      --conflict-loser ConflictLoserAction   Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
    +      --conflict-resolve string              Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
    +      --conflict-suffix string               Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
    +      --create-empty-src-dirs                Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
    +      --download-hash                        Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
    +      --filters-file string                  Read filtering patterns from a file
    +      --force                                Bypass --max-delete safety check and run the sync. Consider using with --verbose
    +  -h, --help                                 help for bisync
    +      --ignore-listing-checksum              Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
    +      --max-lock Duration                    Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
    +      --no-cleanup                           Retain working files (useful for troubleshooting and testing).
    +      --no-slow-hash                         Ignore listing checksums only on backends where they are slow
    +      --recover                              Automatically recover from interruptions without requiring --resync.
    +      --remove-empty-dirs                    Remove ALL empty directories at the final cleanup step.
    +      --resilient                            Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
    +  -1, --resync                               Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
    +      --resync-mode string                   During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
    +      --slow-hash-sync-only                  Ignore slow checksums for listings and deltas, but still consider them during sync calls.
+      --workdir string                       Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
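As an illustrative sketch (the remote names are invented), a cautious first run pairs --resync with --dry-run, and subsequent runs can lean on the conflict and recovery options above:

    rclone bisync remote1:path1 remote2:path2 --resync --dry-run
    rclone bisync remote1:path1 remote2:path2 --conflict-resolve newer --resilient --recover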

    Copy Options

    Flags for anything which can Copy a file.

          --check-first                                 Do all the checks before starting transfers
    @@ -1629,7 +1687,7 @@ rclone backend help <backendname>
    --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1643,6 +1701,7 @@ rclone backend help <backendname> --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -2218,7 +2277,7 @@ if src is directory --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -2232,6 +2291,7 @@ if src is directory --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -2279,12 +2339,22 @@ if src is directory
  • rclone - Show help for rclone commands, flags and backends.
  • rclone copyurl

    -

    Copy url content to dest.

    +

Copy the contents of the URL supplied to dest:path.

    Synopsis

    Download a URL's content and copy it to the destination without saving it in temporary storage.

    -

    Setting --auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and used in the destination path. With --auto-filename-header in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename in addition, the resulting file name will be printed.

    +

Setting --auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and use it in the destination path.

    +

    With --auto-filename-header in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename in addition, the resulting file name will be printed.

Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.

    Setting --stdout or making the output file name - will cause the output to be written to standard output.
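For example (the URL is illustrative), to take the file name from the URL, or to stream the download to stdout by naming the output -:

    rclone copyurl --auto-filename https://example.com/file.zip remote:dir
    rclone copyurl https://example.com/file.zip -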

    +

Troubleshooting

    +

    If you can't get rclone copyurl to work then here are some things you can try:

    +
    rclone copyurl https://example.com dest:path [flags]

    Options

      -a, --auto-filename     Get the file name from the URL and use it for destination file path
    @@ -2570,11 +2640,11 @@ rclone link --expire 1d remote:path/to/file

    List all the remotes in the config file and defined in environment variables.

    Synopsis

    rclone listremotes lists all the available remotes from the config file.

    -

    When used with the --long flag it lists the types too.

    +

    When used with the --long flag it lists the types and the descriptions too.
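A sketch of what this might look like (the remote names, and the exact column layout, are invented for illustration):

    $ rclone listremotes --long
    gdrive:      drive    Personal Google Drive
    s3backup:    s3       Nightly backup bucket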

    rclone listremotes [flags]

    Options

      -h, --help   help for listremotes
    -      --long   Show the type as well as names
    + --long Show the type and the description as well as names

    See the global flags page for global options not listed here.

    SEE ALSO

    See the config password command for more information on the above.

    Authentication is required for this call.

    +

    config/paths: Reads the config file path and other important paths.

    +

    Returns a JSON object with the following keys:

    + +

    Eg

    +
    {
    +    "cache": "/home/USER/.cache/rclone",
    +    "config": "/home/USER/.rclone.conf",
    +    "temp": "/tmp"
    +}
    +

    See the config paths command for more information on the above.

    +

    Authentication is required for this call.

    config/providers: Shows how providers are configured in the config file.

Returns a JSON object:
- providers - array of objects

    See the config providers command for more information on the above.
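Following the loopback pattern used for the other rc calls in this manual, this can be exercised from the command line (a sketch):

    rclone rc --loopback config/providers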

    @@ -8602,6 +9072,37 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache }

    This command does not have a command line equivalent so use this instead:

    rclone rc --loopback operations/fsinfo fs=remote:
    +

    operations/hashsum: Produces a hashsum file for all the objects in the path.

    +

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    +

    This takes the following parameters:

    + +

    If you supply the download flag, it will download the data from the remote and create the hash on the fly. This can be useful for remotes that don't support the given hash or if you really want to check all the data.

    +

    Note that if you wish to supply a checkfile to check hashes against the current files then you should use operations/check instead of operations/hashsum.

    +

    Returns:

    + +

    Example:

    +
    $ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true
    +{
    +    "hashType": "md5",
    +    "hashsum": [
    +        "WTSVLpuiXyJO_kGzJerRLg==  backend-versions.sh",
    +        "v1b_OlWCJO9LtNq3EIKkNQ==  bisect-go-rclone.sh",
    +        "VHbmHzHh4taXzgag8BAIKQ==  bisect-rclone.sh",
    +    ]
    +}
    +

    See the hashsum command for more information on the above.

    +

    Authentication is required for this call.

    operations/list: List the given remote and path in JSON format

    This takes the following parameters:

    See bisync command help and full bisync description for more information.

    @@ -9100,15 +9603,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - -Amazon Drive -MD5 -- -Yes -No -R -- - - Amazon S3 (or S3 compatible) MD5 R/W @@ -9117,7 +9611,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W RWU - + Backblaze B2 SHA1 R/W @@ -9126,7 +9620,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Box SHA1 R/W @@ -9135,7 +9629,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Citrix ShareFile MD5 R/W @@ -9144,7 +9638,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Dropbox DBHASH ¹ R @@ -9153,7 +9647,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Enterprise File Fabric - R/W @@ -9162,7 +9656,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + FTP - R/W ¹⁰ @@ -9171,7 +9665,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Google Cloud Storage MD5 R/W @@ -9180,16 +9674,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Google Drive MD5, SHA1, SHA256 -R/W +DR/W No Yes R/W -- +DRWU - + Google Photos - - @@ -9198,7 +9692,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + HDFS - R/W @@ -9207,7 +9701,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + HiDrive HiDrive ¹² R/W @@ -9216,7 +9710,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + HTTP - R @@ -9225,7 +9719,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + Internet Archive MD5, SHA1, CRC32 R/W ¹¹ @@ -9234,7 +9728,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - RWU - + Jottacloud MD5 R/W @@ -9243,7 +9737,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R RW - + Koofr MD5 - @@ -9252,7 +9746,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Linkbox - R @@ -9261,7 +9755,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Mail.ru Cloud Mailru ⁶ R/W @@ -9270,7 +9764,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Mega - - @@ -9279,7 +9773,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Memory MD5 R/W @@ -9288,7 +9782,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Microsoft Azure Blob Storage MD5 R/W @@ -9297,7 +9791,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Microsoft Azure Files Storage MD5 R/W @@ -9306,16 +9800,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Microsoft OneDrive QuickXorHash ⁵ -R/W +DR/W Yes No R -- +DRW - + OpenDrive MD5 R/W @@ -9324,7 +9818,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + OpenStack Swift MD5 R/W @@ -9333,7 +9827,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Oracle Object Storage MD5 R/W @@ -9342,7 +9836,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + pCloud MD5, SHA1 ⁷ R @@ -9351,7 +9845,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total W - - + PikPak MD5 R @@ -9360,7 +9854,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + premiumize.me - - @@ -9369,7 +9863,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + put.io CRC-32 R/W @@ -9378,7 +9872,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + Proton Drive SHA1 R/W @@ 
-9387,7 +9881,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + QingStor MD5 - ⁹ @@ -9396,7 +9890,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R/W - - + Quatrix by Maytech - R/W @@ -9405,7 +9899,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Seafile - - @@ -9414,16 +9908,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + SFTP MD5, SHA1 ² -R/W +DR/W Depends No - - - + Sia - - @@ -9432,7 +9926,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + SMB - R/W @@ -9441,7 +9935,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + SugarSync - - @@ -9450,7 +9944,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Storj - R @@ -9459,7 +9953,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Uptobox - - @@ -9468,7 +9962,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + WebDAV MD5, SHA1 ³ R ⁴ @@ -9477,7 +9971,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + Yandex Disk MD5 R/W @@ -9486,7 +9980,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total R - - + Zoho WorkDrive - - @@ -9495,14 +9989,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total - - - + The local filesystem All -R/W +DR/W Depends No - -RWU +DRWU @@ -9522,10 +10016,45 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

Cloud storage systems support various hash types for the objects they store. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.

To verify checksums when transferring between cloud storage systems they must support a common hash type.
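For example (the remotes are illustrative), to force hash-based comparison during a sync, or to verify source against destination after the fact:

    rclone sync --checksum source:path dest:path
    rclone check source:path dest:path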

    ModTime

    -

    Almost all cloud storage systems store some sort of timestamp on objects, but several of them not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represent the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though can be configured to check the file hash (with the --checksum flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.

    +

Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the --checksum flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.

Key     Explanation
-       ModTimes not supported - times likely the upload time
R       ModTimes supported on files but can't be changed without re-upload
R/W     Read and Write ModTimes fully supported on files
DR      ModTimes supported on files and directories but can't be changed without re-upload
DR/W    Read and Write ModTimes fully supported on files and directories

For storage systems with a - in the ModTime column, the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).

Storage systems with an R (for read-only) in the ModTime column keep modification times on objects, and update them when uploading objects, but do not support changing only the modification time (the SetModTime operation) without re-uploading, possibly not even without deleting the existing object first. Some operations in rclone, such as the copy and sync commands, will automatically check for SetModTime support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime support, e.g. the touch command on an existing file will fail, and changes to modification time only on files in a mount will be silently ignored.

Storage systems with R/W (for read/write) in the ModTime column also support modtime-only operations.

    +

A D in the ModTime column means that the accompanying R or R/W semantics apply to directories as well as files.

    Case Insensitive

If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.

    This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
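The --fix-case flag listed under the Sync Options above is one remedy: it forces a rename of the case insensitive destination to match the source. A sketch (the remotes are illustrative):

    rclone sync --fix-case source:path dest:path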

    @@ -9947,6 +10476,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total

Backends may or may not support reading or writing metadata. They may support reading and writing system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).

    The levels of metadata support are

@@ -9956,15 +10489,27 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Key     Explanation
R       Read only System Metadata on files only
RW      Read and write System Metadata on files only
RWU     Read and write System Metadata and read and write User Metadata on files only
DR      Read only System Metadata on files and directories
DRW     Read and write System Metadata on files and directories
DRWU    Read and write System Metadata and read and write User Metadata on files and directories
    @@ -10032,20 +10577,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes -Amazon Drive -Yes -No -Yes -Yes -No -No -No -No -No -No -Yes - - Amazon S3 (or S3 compatible) No Yes @@ -10059,7 +10590,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No - + Backblaze B2 No Yes @@ -10073,7 +10604,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No - + Box Yes Yes @@ -10087,7 +10618,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + Citrix ShareFile Yes Yes @@ -10101,7 +10632,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + Dropbox Yes Yes @@ -10115,7 +10646,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + Enterprise File Fabric Yes Yes @@ -10129,7 +10660,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + FTP No No @@ -10143,7 +10674,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + Google Cloud Storage Yes Yes @@ -10157,7 +10688,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No - + Google Drive Yes Yes @@ -10171,7 +10702,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + Google Photos No No @@ -10185,7 +10716,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No No - + HDFS Yes No @@ -10199,7 +10730,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total Yes Yes - + HiDrive Yes Yes @@ -10213,7 +10744,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes - + HTTP No No @@ -10227,6 +10758,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total No Yes + +ImageKit +Yes +Yes +Yes +No +No +No +No +No +No +No +Yes + Internet Archive No @@ -10635,7 +11180,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total The local filesystem -Yes +No No Yes Yes @@ -10696,7 +11241,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -10710,6 +11255,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 
'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -10724,6 +11270,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -10761,7 +11308,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0")

    Performance

    Flags helpful for increasing performance.

          --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer (default 16Mi)
    @@ -10891,14 +11438,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --rc-web-gui-update                  Check and update to latest version of web gui

    Backend

    Backend only flags. These can be set in the config file also.

    -
          --acd-auth-url string                                 Auth server URL
    -      --acd-client-id string                                OAuth Client Id
    -      --acd-client-secret string                            OAuth Client Secret
    -      --acd-encoding Encoding                               The encoding for the backend (default Slash,InvalidUtf8,Dot)
    -      --acd-templink-threshold SizeSuffix                   Files >= this size will be downloaded via their tempLink (default 9Gi)
    -      --acd-token string                                    OAuth Access Token as a JSON blob
    -      --acd-token-url string                                Token server url
    -      --acd-upload-wait-per-gb Duration                     Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
    +
          --alias-description string                            Description of the remote
           --alias-remote string                                 Remote or path to alias
           --azureblob-access-tier string                        Access tier of blob: hot, cool, cold or archive
           --azureblob-account string                            Azure Storage Account Name
    @@ -10909,6 +11449,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --azureblob-client-id string                          The ID of the client in use
           --azureblob-client-secret string                      One of the service principal's client secrets
           --azureblob-client-send-certificate-chain             Send the certificate chain when using certificate auth
    +      --azureblob-delete-snapshots string                   Set to specify how to deal with snapshots on blob deletion
    +      --azureblob-description string                        Description of the remote
           --azureblob-directory-markers                         Upload an empty object with a trailing slash when a new directory is created
           --azureblob-disable-checksum                          Don't store MD5 checksum with object metadata
           --azureblob-encoding Encoding                         The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
    @@ -10939,6 +11481,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --azurefiles-client-secret string                     One of the service principal's client secrets
           --azurefiles-client-send-certificate-chain            Send the certificate chain when using certificate auth
           --azurefiles-connection-string string                 Azure Files Connection String
    +      --azurefiles-description string                       Description of the remote
           --azurefiles-encoding Encoding                        The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
           --azurefiles-endpoint string                          Endpoint for the service
           --azurefiles-env-auth                                 Read credentials from runtime (environment variables, CLI or MSI)
    @@ -10958,8 +11501,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --b2-account string                                   Account ID or Application Key ID
           --b2-chunk-size SizeSuffix                            Upload chunk size (default 96Mi)
           --b2-copy-cutoff SizeSuffix                           Cutoff for switching to multipart copy (default 4Gi)
    +      --b2-description string                               Description of the remote
           --b2-disable-checksum                                 Disable checksums for large (> upload cutoff) files
    -      --b2-download-auth-duration Duration                  Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
    +      --b2-download-auth-duration Duration                  Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
           --b2-download-url string                              Custom endpoint for downloads
           --b2-encoding Encoding                                The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --b2-endpoint string                                  Endpoint for the service
    @@ -10978,6 +11522,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --box-client-id string                                OAuth Client Id
           --box-client-secret string                            OAuth Client Secret
           --box-commit-retries int                              Max number of times to try committing a multipart file (default 100)
    +      --box-description string                              Description of the remote
           --box-encoding Encoding                               The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
           --box-impersonate string                              Impersonate this user ID when using a service account
           --box-list-chunk int                                  Size of listing chunk 1-1000 (default 1000)
    @@ -10994,6 +11539,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --cache-db-path string                                Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
           --cache-db-purge                                      Clear all the cached data for this remote on start
           --cache-db-wait-time Duration                         How long to wait for the DB to be available - 0 is unlimited (default 1s)
    +      --cache-description string                            Description of the remote
           --cache-info-age Duration                             How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
           --cache-plex-insecure string                          Skip all certificate verification when connecting to the Plex server
           --cache-plex-password string                          The password of the Plex user (obscured)
    @@ -11007,15 +11553,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --cache-workers int                                   How many workers should run in parallel to download chunks (default 4)
           --cache-writes                                        Cache file data on writes through the FS
           --chunker-chunk-size SizeSuffix                       Files larger than chunk size will be split in chunks (default 2Gi)
    +      --chunker-description string                          Description of the remote
           --chunker-fail-hard                                   Choose how chunker should handle files with missing or invalid chunks
           --chunker-hash-type string                            Choose how chunker handles hash sums (default "md5")
           --chunker-remote string                               Remote to chunk/unchunk
    +      --combine-description string                          Description of the remote
           --combine-upstreams SpaceSepList                      Upstreams for combining
    +      --compress-description string                         Description of the remote
           --compress-level int                                  GZIP compression level (-2 to 9) (default -1)
           --compress-mode string                                Compression mode (default "gzip")
           --compress-ram-cache-limit SizeSuffix                 Some remotes don't allow the upload of files with unknown size (default 20Mi)
           --compress-remote string                              Remote to compress
       -L, --copy-links                                          Follow symlinks and copy the pointed to item
    +      --crypt-description string                            Description of the remote
           --crypt-directory-name-encryption                     Option to either encrypt directory names or leave them intact (default true)
           --crypt-filename-encoding string                      How to encode the encrypted filename to text string (default "base32")
           --crypt-filename-encryption string                    How to encrypt the filenames (default "standard")
    @@ -11026,6 +11576,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --crypt-remote string                                 Remote to encrypt/decrypt
           --crypt-server-side-across-configs                    Deprecated: use --server-side-across-configs instead
           --crypt-show-mapping                                  For all files listed show how the names encrypt
    +      --crypt-strict-names                                  If set, this will raise an error when crypt comes across a filename that can't be decrypted
           --crypt-suffix string                                 If this is set it will override the default suffix of ".bin" (default ".bin")
           --drive-acknowledge-abuse                             Set to allow files which return cannotDownloadAbusiveFile to be downloaded
           --drive-allow-import-name-change                      Allow the filetype to change when uploading Google docs
    @@ -11035,6 +11586,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --drive-client-id string                              Google Application Client Id
           --drive-client-secret string                          OAuth Client Secret
           --drive-copy-shortcut-content                         Server side copy contents of shortcuts instead of the shortcut
    +      --drive-description string                            Description of the remote
           --drive-disable-http2                                 Disable drive using http2 (default true)
           --drive-encoding Encoding                             The encoding for the backend (default InvalidUtf8)
           --drive-env-auth                                      Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
    @@ -11083,6 +11635,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --dropbox-chunk-size SizeSuffix                       Upload chunk size (< 150Mi) (default 48Mi)
           --dropbox-client-id string                            OAuth Client Id
           --dropbox-client-secret string                        OAuth Client Secret
    +      --dropbox-description string                          Description of the remote
           --dropbox-encoding Encoding                           The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
           --dropbox-impersonate string                          Impersonate this user when using a business account
           --dropbox-pacer-min-sleep Duration                    Minimum time to sleep between API calls (default 10ms)
    @@ -11092,10 +11645,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --dropbox-token-url string                            Token server url
           --fichier-api-key string                              Your API Key, get it from https://1fichier.com/console/params.pl
           --fichier-cdn                                         Set if you wish to use CDN download links
    +      --fichier-description string                          Description of the remote
           --fichier-encoding Encoding                           The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
           --fichier-file-password string                        If you want to download a shared file that is password protected, add this parameter (obscured)
           --fichier-folder-password string                      If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
           --fichier-shared-folder string                        If you want to download a shared folder, add this parameter
    +      --filefabric-description string                       Description of the remote
           --filefabric-encoding Encoding                        The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
           --filefabric-permanent-token string                   Permanent Authentication Token
           --filefabric-root-folder-id string                    ID of the root folder
    @@ -11106,6 +11661,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --ftp-ask-password                                    Allow asking for FTP password when needed
           --ftp-close-timeout Duration                          Maximum time to wait for a response to close (default 1m0s)
           --ftp-concurrency int                                 Maximum number of FTP simultaneous connections, 0 for unlimited
    +      --ftp-description string                              Description of the remote
           --ftp-disable-epsv                                    Disable using EPSV even if server advertises support
           --ftp-disable-mlsd                                    Disable using MLSD even if server advertises support
           --ftp-disable-tls13                                   Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
    @@ -11131,6 +11687,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --gcs-client-id string                                OAuth Client Id
           --gcs-client-secret string                            OAuth Client Secret
           --gcs-decompress                                      If set this will decompress gzip encoded objects
    +      --gcs-description string                              Description of the remote
           --gcs-directory-markers                               Upload an empty object with a trailing slash when a new directory is created
           --gcs-encoding Encoding                               The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
           --gcs-endpoint string                                 Endpoint for the service
    @@ -11151,6 +11708,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --gphotos-batch-timeout Duration                      Max time to allow an idle upload batch before uploading (default 0s)
           --gphotos-client-id string                            OAuth Client Id
           --gphotos-client-secret string                        OAuth Client Secret
    +      --gphotos-description string                          Description of the remote
           --gphotos-encoding Encoding                           The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
           --gphotos-include-archived                            Also view and download archived media
           --gphotos-read-only                                   Set to make the Google Photos backend read only
    @@ -11159,10 +11717,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --gphotos-token string                                OAuth Access Token as a JSON blob
           --gphotos-token-url string                            Token server url
           --hasher-auto-size SizeSuffix                         Auto-update checksum for files smaller than this size (disabled by default)
    +      --hasher-description string                           Description of the remote
           --hasher-hashes CommaSepList                          Comma separated list of supported checksum types (default md5,sha1)
           --hasher-max-age Duration                             Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
           --hasher-remote string                                Remote to cache checksums for (e.g. myRemote:path)
           --hdfs-data-transfer-protection string                Kerberos data transfer protection: authentication|integrity|privacy
    +      --hdfs-description string                             Description of the remote
           --hdfs-encoding Encoding                              The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
           --hdfs-namenode CommaSepList                          Hadoop name nodes and ports
           --hdfs-service-principal-name string                  Kerberos service principal name for the namenode
    @@ -11171,6 +11731,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --hidrive-chunk-size SizeSuffix                       Chunksize for chunked uploads (default 48Mi)
           --hidrive-client-id string                            OAuth Client Id
           --hidrive-client-secret string                        OAuth Client Secret
    +      --hidrive-description string                          Description of the remote
           --hidrive-disable-fetching-member-count               Do not fetch number of objects in directories unless it is absolutely necessary
           --hidrive-encoding Encoding                           The encoding for the backend (default Slash,Dot)
           --hidrive-endpoint string                             Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
    @@ -11181,10 +11742,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --hidrive-token-url string                            Token server url
           --hidrive-upload-concurrency int                      Concurrency for chunked uploads (default 4)
           --hidrive-upload-cutoff SizeSuffix                    Cutoff/Threshold for chunked uploads (default 96Mi)
    +      --http-description string                             Description of the remote
           --http-headers CommaSepList                           Set HTTP headers for all transactions
           --http-no-head                                        Don't use HEAD requests
           --http-no-slash                                       Set this if the site doesn't end directories with /
           --http-url string                                     URL of HTTP host to connect to
    +      --imagekit-description string                         Description of the remote
           --imagekit-encoding Encoding                          The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
           --imagekit-endpoint string                            You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
           --imagekit-only-signed Restrict unsigned image URLs   If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
    @@ -11193,6 +11756,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --imagekit-upload-tags string                         Tags to add to the uploaded files, e.g. "tag1,tag2"
           --imagekit-versions                                   Include old versions in directory listings
           --internetarchive-access-key-id string                IAS3 Access Key
    +      --internetarchive-description string                  Description of the remote
           --internetarchive-disable-checksum                    Don't ask the server to test against MD5 checksum calculated by rclone (default true)
           --internetarchive-encoding Encoding                   The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
           --internetarchive-endpoint string                     IAS3 Endpoint (default "https://s3.us.archive.org")
    @@ -11202,6 +11766,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --jottacloud-auth-url string                          Auth server URL
           --jottacloud-client-id string                         OAuth Client Id
           --jottacloud-client-secret string                     OAuth Client Secret
    +      --jottacloud-description string                       Description of the remote
           --jottacloud-encoding Encoding                        The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
           --jottacloud-hard-delete                              Delete files permanently rather than putting them into the trash
           --jottacloud-md5-memory-limit SizeSuffix              Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
    @@ -11210,6 +11775,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --jottacloud-token-url string                         Token server url
           --jottacloud-trashed-only                             Only show files that are in the trash
      --jottacloud-upload-resume-limit SizeSuffix           Files bigger than this can be resumed if the upload fails (default 10Mi)
    +      --koofr-description string                            Description of the remote
           --koofr-encoding Encoding                             The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --koofr-endpoint string                               The Koofr API endpoint to use
           --koofr-mountid string                                Mount ID of the mount to use
    @@ -11217,10 +11783,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --koofr-provider string                               Choose your storage provider
           --koofr-setmtime                                      Does the backend support setting modification time (default true)
           --koofr-user string                                   Your user name
    +      --linkbox-description string                          Description of the remote
           --linkbox-token string                                Token from https://www.linkbox.to/admin/account
       -l, --links                                               Translate symlinks to/from regular files with a '.rclonelink' extension
           --local-case-insensitive                              Force the filesystem to report itself as case insensitive
           --local-case-sensitive                                Force the filesystem to report itself as case sensitive
    +      --local-description string                            Description of the remote
           --local-encoding Encoding                             The encoding for the backend (default Slash,Dot)
           --local-no-check-updated                              Don't check to see if the files change during upload
           --local-no-preallocate                                Disable preallocation of disk space for transferred files
    @@ -11233,6 +11801,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --mailru-check-hash                                   What should copy do if file checksum is mismatched or invalid (default true)
           --mailru-client-id string                             OAuth Client Id
           --mailru-client-secret string                         OAuth Client Secret
    +      --mailru-description string                           Description of the remote
           --mailru-encoding Encoding                            The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --mailru-pass string                                  Password (obscured)
           --mailru-speedup-enable                               Skip full upload if there is another file with same data hash (default true)
    @@ -11243,12 +11812,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --mailru-token-url string                             Token server url
           --mailru-user string                                  User name (usually email)
           --mega-debug                                          Output more debug from Mega
    +      --mega-description string                             Description of the remote
           --mega-encoding Encoding                              The encoding for the backend (default Slash,InvalidUtf8,Dot)
           --mega-hard-delete                                    Delete files permanently rather than putting them into the trash
           --mega-pass string                                    Password (obscured)
           --mega-use-https                                      Use HTTPS for transfers
           --mega-user string                                    User name
    +      --memory-description string                           Description of the remote
           --netstorage-account string                           Set the NetStorage account name
    +      --netstorage-description string                       Description of the remote
           --netstorage-host string                              Domain+path of NetStorage host to connect to
           --netstorage-protocol string                          Select between HTTP or HTTPS protocol (default "https")
           --netstorage-secret string                            Set the NetStorage account secret/G2O key for authentication (obscured)
    @@ -11260,6 +11832,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --onedrive-client-id string                           OAuth Client Id
           --onedrive-client-secret string                       OAuth Client Secret
           --onedrive-delta                                      If set rclone will use delta listing to implement recursive listings
    +      --onedrive-description string                         Description of the remote
           --onedrive-drive-id string                            The ID of the drive to use
           --onedrive-drive-type string                          The type of the drive (personal | business | documentLibrary)
           --onedrive-encoding Encoding                          The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
    @@ -11269,6 +11842,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --onedrive-link-scope string                          Set the scope of the links created by the link command (default "anonymous")
           --onedrive-link-type string                           Set the type of the links created by the link command (default "view")
           --onedrive-list-chunk int                             Size of listing chunk (default 1000)
    +      --onedrive-metadata-permissions Bits                  Control whether permissions should be read or written in metadata (default off)
           --onedrive-no-versions                                Remove all versions on modifying operations
           --onedrive-region string                              Choose national cloud region for OneDrive (default "global")
           --onedrive-root-folder-id string                      ID of the root folder
    @@ -11282,6 +11856,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --oos-config-profile string                           Profile name inside the oci config file (default "Default")
           --oos-copy-cutoff SizeSuffix                          Cutoff for switching to multipart copy (default 4.656Gi)
           --oos-copy-timeout Duration                           Timeout for copy (default 1m0s)
    +      --oos-description string                              Description of the remote
           --oos-disable-checksum                                Don't store MD5 checksum with object metadata
           --oos-encoding Encoding                               The encoding for the backend (default Slash,InvalidUtf8,Dot)
           --oos-endpoint string                                 Endpoint for Object storage API
    @@ -11300,12 +11875,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --oos-upload-concurrency int                          Concurrency for multipart uploads (default 10)
           --oos-upload-cutoff SizeSuffix                        Cutoff for switching to chunked upload (default 200Mi)
           --opendrive-chunk-size SizeSuffix                     Files will be uploaded in chunks this size (default 10Mi)
    +      --opendrive-description string                        Description of the remote
           --opendrive-encoding Encoding                         The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
           --opendrive-password string                           Password (obscured)
           --opendrive-username string                           Username
           --pcloud-auth-url string                              Auth server URL
           --pcloud-client-id string                             OAuth Client Id
           --pcloud-client-secret string                         OAuth Client Secret
    +      --pcloud-description string                           Description of the remote
           --pcloud-encoding Encoding                            The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --pcloud-hostname string                              Hostname to connect to (default "api.pcloud.com")
           --pcloud-password string                              Your pcloud password (obscured)
    @@ -11316,6 +11893,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --pikpak-auth-url string                              Auth server URL
           --pikpak-client-id string                             OAuth Client Id
           --pikpak-client-secret string                         OAuth Client Secret
    +      --pikpak-description string                           Description of the remote
           --pikpak-encoding Encoding                            The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
           --pikpak-hash-memory-limit SizeSuffix                 Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
           --pikpak-pass string                                  Pikpak password (obscured)
    @@ -11328,11 +11906,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --premiumizeme-auth-url string                        Auth server URL
           --premiumizeme-client-id string                       OAuth Client Id
           --premiumizeme-client-secret string                   OAuth Client Secret
    +      --premiumizeme-description string                     Description of the remote
           --premiumizeme-encoding Encoding                      The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --premiumizeme-token string                           OAuth Access Token as a JSON blob
           --premiumizeme-token-url string                       Token server url
           --protondrive-2fa string                              The 2FA code
           --protondrive-app-version string                      The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
    +      --protondrive-description string                      Description of the remote
           --protondrive-enable-caching                          Caches the files and folders metadata to reduce API calls (default true)
           --protondrive-encoding Encoding                       The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
           --protondrive-mailbox-password string                 The mailbox password of your two-password proton account (obscured)
    @@ -11343,12 +11923,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --putio-auth-url string                               Auth server URL
           --putio-client-id string                              OAuth Client Id
           --putio-client-secret string                          OAuth Client Secret
    +      --putio-description string                            Description of the remote
           --putio-encoding Encoding                             The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --putio-token string                                  OAuth Access Token as a JSON blob
           --putio-token-url string                              Token server url
           --qingstor-access-key-id string                       QingStor Access Key ID
           --qingstor-chunk-size SizeSuffix                      Chunk size to use for uploading (default 4Mi)
           --qingstor-connection-retries int                     Number of connection retries (default 3)
    +      --qingstor-description string                         Description of the remote
           --qingstor-encoding Encoding                          The encoding for the backend (default Slash,Ctl,InvalidUtf8)
      --qingstor-endpoint string                            Enter an endpoint URL to connect to the QingStor API
           --qingstor-env-auth                                   Get QingStor credentials from runtime
    @@ -11357,18 +11939,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --qingstor-upload-cutoff SizeSuffix                   Cutoff for switching to chunked upload (default 200Mi)
           --qingstor-zone string                                Zone to connect to
           --quatrix-api-key string                              API key for accessing Quatrix account
    +      --quatrix-description string                          Description of the remote
           --quatrix-effective-upload-time string                Wanted upload time for one chunk (default "4s")
           --quatrix-encoding Encoding                           The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
           --quatrix-hard-delete                                 Delete files permanently rather than putting them into the trash
           --quatrix-host string                                 Host name of Quatrix account
           --quatrix-maximal-summary-chunk-size SizeSuffix       The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
           --quatrix-minimal-chunk-size SizeSuffix               The minimal size for one chunk (default 9.537Mi)
    +      --quatrix-skip-project-folders                        Skip project folders in operations
           --s3-access-key-id string                             AWS Access Key ID
           --s3-acl string                                       Canned ACL used when creating buckets and storing or copying objects
           --s3-bucket-acl string                                Canned ACL used when creating buckets
           --s3-chunk-size SizeSuffix                            Chunk size to use for uploading (default 5Mi)
           --s3-copy-cutoff SizeSuffix                           Cutoff for switching to multipart copy (default 4.656Gi)
           --s3-decompress                                       If set this will decompress gzip encoded objects
    +      --s3-description string                               Description of the remote
           --s3-directory-markers                                Upload an empty object with a trailing slash when a new directory is created
           --s3-disable-checksum                                 Don't store MD5 checksum with object metadata
           --s3-disable-http2                                    Disable usage of http2 for S3 backends
    @@ -11403,19 +11988,22 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --s3-sse-kms-key-id string                            If using KMS ID you must provide the ARN of Key
           --s3-storage-class string                             The storage class to use when storing new objects in S3
           --s3-sts-endpoint string                              Endpoint for STS
    -      --s3-upload-concurrency int                           Concurrency for multipart uploads (default 4)
    +      --s3-upload-concurrency int                           Concurrency for multipart uploads and copies (default 4)
           --s3-upload-cutoff SizeSuffix                         Cutoff for switching to chunked upload (default 200Mi)
           --s3-use-accelerate-endpoint                          If true use the AWS S3 accelerated endpoint
           --s3-use-accept-encoding-gzip Accept-Encoding: gzip   Whether to send Accept-Encoding: gzip header (default unset)
           --s3-use-already-exists Tristate                      Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
    +      --s3-use-dual-stack                                   If true use AWS S3 dual-stack endpoint (IPv6 support)
           --s3-use-multipart-etag Tristate                      Whether to use ETag in multipart uploads for verification (default unset)
           --s3-use-multipart-uploads Tristate                   Set if rclone should use multipart uploads (default unset)
           --s3-use-presigned-request                            Whether to use a presigned request or PutObject for single part uploads
           --s3-v2-auth                                          If true use v2 authentication
           --s3-version-at Time                                  Show file versions as they were at the specified time (default off)
    +      --s3-version-deleted                                  Show deleted file markers when using versions
           --s3-versions                                         Include old versions in directory listings
           --seafile-2fa                                         Two-factor authentication ('true' if the account has 2FA enabled)
           --seafile-create-library                              Should rclone create a library if it doesn't exist
    +      --seafile-description string                          Description of the remote
           --seafile-encoding Encoding                           The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
           --seafile-library string                              Name of the library
           --seafile-library-key string                          Library password (for encrypted libraries only) (obscured)
    @@ -11427,6 +12015,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sftp-ciphers SpaceSepList                           Space separated list of ciphers to be used for session encryption, ordered by preference
           --sftp-concurrency int                                The maximum number of outstanding requests for one file (default 64)
           --sftp-copy-is-hardlink                               Set to enable server side copies using hardlinks
    +      --sftp-description string                             Description of the remote
           --sftp-disable-concurrent-reads                       If set don't use concurrent reads
           --sftp-disable-concurrent-writes                      If set don't use concurrent writes
           --sftp-disable-hashcheck                              Disable the execution of SSH commands to determine if remote file hashing is available
    @@ -11461,6 +12050,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sharefile-chunk-size SizeSuffix                     Upload chunk size (default 64Mi)
           --sharefile-client-id string                          OAuth Client Id
           --sharefile-client-secret string                      OAuth Client Secret
    +      --sharefile-description string                        Description of the remote
           --sharefile-encoding Encoding                         The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
           --sharefile-endpoint string                           Endpoint for API calls
           --sharefile-root-folder-id string                     ID of the root folder
    @@ -11469,10 +12059,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sharefile-upload-cutoff SizeSuffix                  Cutoff for switching to multipart upload (default 128Mi)
           --sia-api-password string                             Sia Daemon API Password (obscured)
           --sia-api-url string                                  Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
    +      --sia-description string                              Description of the remote
           --sia-encoding Encoding                               The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
           --sia-user-agent string                               Siad User Agent (default "Sia-Agent")
           --skip-links                                          Don't warn about skipped symlinks
           --smb-case-insensitive                                Whether the server is configured to be case-insensitive (default true)
    +      --smb-description string                              Description of the remote
           --smb-domain string                                   Domain name for NTLM authentication (default "WORKGROUP")
           --smb-encoding Encoding                               The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
           --smb-hide-special-share                              Hide special shares (e.g. print$) which users aren't supposed to access (default true)
    @@ -11484,6 +12076,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --smb-user string                                     SMB username (default "$USER")
           --storj-access-grant string                           Access grant
           --storj-api-key string                                API key
    +      --storj-description string                            Description of the remote
           --storj-passphrase string                             Encryption passphrase
           --storj-provider string                               Choose an authentication method (default "existing")
           --storj-satellite-address string                      Satellite address (default "us1.storj.io")
    @@ -11492,6 +12085,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --sugarsync-authorization string                      Sugarsync authorization
           --sugarsync-authorization-expiry string               Sugarsync authorization expiry
           --sugarsync-deleted-id string                         Sugarsync deleted folder id
    +      --sugarsync-description string                        Description of the remote
           --sugarsync-encoding Encoding                         The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
           --sugarsync-hard-delete                               Permanently delete files if true
           --sugarsync-private-access-key string                 Sugarsync Private Access Key
    @@ -11505,6 +12099,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --swift-auth-token string                             Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
           --swift-auth-version int                              AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
           --swift-chunk-size SizeSuffix                         Above this size files will be chunked into a _segments container (default 5Gi)
    +      --swift-description string                            Description of the remote
           --swift-domain string                                 User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
           --swift-encoding Encoding                             The encoding for the backend (default Slash,InvalidUtf8)
           --swift-endpoint-type string                          Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
    @@ -11524,17 +12119,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --union-action-policy string                          Policy to choose upstream on ACTION category (default "epall")
           --union-cache-time int                                Cache time of usage and free space (in seconds) (default 120)
           --union-create-policy string                          Policy to choose upstream on CREATE category (default "epmfs")
    +      --union-description string                            Description of the remote
           --union-min-free-space SizeSuffix                     Minimum viable free space for lfs/eplfs policies (default 1Gi)
           --union-search-policy string                          Policy to choose upstream on SEARCH category (default "ff")
           --union-upstreams string                              List of space separated upstreams
           --uptobox-access-token string                         Your access token
    +      --uptobox-description string                          Description of the remote
           --uptobox-encoding Encoding                           The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
           --uptobox-private                                     Set to make uploaded files private
           --webdav-bearer-token string                          Bearer token instead of user/pass (e.g. a Macaroon)
           --webdav-bearer-token-command string                  Command to run to get a bearer token
    +      --webdav-description string                           Description of the remote
           --webdav-encoding string                              The encoding for the backend
           --webdav-headers CommaSepList                         Set HTTP headers for all transactions
           --webdav-nextcloud-chunk-size SizeSuffix              Nextcloud upload chunk size (default 10Mi)
    +      --webdav-owncloud-exclude-shares                      Exclude ownCloud shares
           --webdav-pacer-min-sleep Duration                     Minimum time to sleep between API calls (default 10ms)
           --webdav-pass string                                  Password (obscured)
           --webdav-url string                                   URL of http host to connect to
    @@ -11543,6 +12142,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --yandex-auth-url string                              Auth server URL
           --yandex-client-id string                             OAuth Client Id
           --yandex-client-secret string                         OAuth Client Secret
    +      --yandex-description string                           Description of the remote
           --yandex-encoding Encoding                            The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
           --yandex-hard-delete                                  Delete files permanently rather than putting them into the trash
           --yandex-token string                                 OAuth Access Token as a JSON blob
    @@ -11550,6 +12150,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
           --zoho-auth-url string                                Auth server URL
           --zoho-client-id string                               OAuth Client Id
           --zoho-client-secret string                           OAuth Client Secret
    +      --zoho-description string                             Description of the remote
           --zoho-encoding Encoding                              The encoding for the backend (default Del,Ctl,InvalidUtf8)
           --zoho-region string                                  Zoho region to connect to
           --zoho-token string                                   OAuth Access Token as a JSON blob
    @@ -11735,16 +12336,21 @@ docker volume create my_vol -d rclone -o opt1=new_val1 ...
    docker volume list
     docker volume inspect my_vol

    If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.
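
For example, one possible cleanup sequence (using the my_vol volume from above; the container ID is a placeholder):

docker ps --filter volume=my_vol
docker stop <container-id>
docker volume rm my_vol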

    +

    Bisync

    +

    bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.

    Getting started

• Install rclone and setup your remotes.
-• Bisync will create its working directory at ~/.cache/rclone/bisync on Linux or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows. Make sure that this location is writable.
+• Bisync will create its working directory at ~/.cache/rclone/bisync on Linux, /Users/yourusername/Library/Caches/rclone/bisync on Mac, or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows. Make sure that this location is writable.
• Run bisync with the --resync flag, specifying the paths to the local and remote sync directory roots.
-• For successive sync runs, leave off the --resync flag.
+• For successive sync runs, leave off the --resync flag. (Important!)
• Consider using a filters file for excluding unnecessary files and directories from the sync.
• Consider setting up the --check-access feature for safety.
-• On Linux, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.
+• On Linux or Mac, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.
    +

    For example, your first command might look like this:

+rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run

    If all looks good, run it again without --dry-run. After that, remove --resync as well.
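
Using the same hypothetical remotes, the progression would be:

rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case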

    Here is a typical run log (with timestamps removed for clarity):

    rclone bisync /testdir/path1/ /testdir/path2/ --verbose
     INFO  : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
    @@ -11797,36 +12403,36 @@ Positional arguments:
                     Type 'rclone listremotes' for list of configured remotes.
     
     Optional Flags:
    -      --check-access            Ensure expected `RCLONE_TEST` files are found on
    -                                both Path1 and Path2 filesystems, else abort.
    -      --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`)
    -      --check-sync CHOICE       Controls comparison of final listings:
    -                                `true | false | only` (default: true)
    -                                If set to `only`, bisync will only compare listings
    -                                from the last run but skip actual sync.
    -      --filters-file PATH       Read filtering patterns from a file
    -      --max-delete PERCENT      Safety check on maximum percentage of deleted files allowed.
    -                                If exceeded, the bisync run will abort. (default: 50%)
    -      --force                   Bypass `--max-delete` safety check and run the sync.
    -                                Consider using with `--verbose`
    -      --create-empty-src-dirs   Sync creation and deletion of empty directories. 
    -                                  (Not compatible with --remove-empty-dirs)
    -      --remove-empty-dirs       Remove empty directories at the final cleanup step.
    -  -1, --resync                  Performs the resync run.
    -                                Warning: Path1 files may overwrite Path2 versions.
    -                                Consider using `--verbose` or `--dry-run` first.
    -      --ignore-listing-checksum Do not use checksums for listings 
    -                                  (add --ignore-checksum to additionally skip post-copy checksum checks)
    -      --resilient               Allow future runs to retry after certain less-serious errors, 
    -                                  instead of requiring --resync. Use at your own risk!
    -      --localtime               Use local time in listings (default: UTC)
    -      --no-cleanup              Retain working files (useful for troubleshooting and testing).
    -      --workdir PATH            Use custom working directory (useful for testing).
    -                                (default: `~/.cache/rclone/bisync`)
    -  -n, --dry-run                 Go through the motions - No files are copied/deleted.
    -  -v, --verbose                 Increases logging verbosity.
    -                                May be specified more than once for more details.
    -  -h, --help                    help for bisync
+      --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
+      --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
+      --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
+      --check-filename string                Filename for --check-access (default: RCLONE_TEST)
+      --check-sync string                    Controls comparison of final listings: true|false|only (default: true) (default "true")
+      --compare string                       Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
+      --conflict-loser ConflictLoserAction   Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
+      --conflict-resolve string              Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
+      --conflict-suffix string               Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
+      --create-empty-src-dirs                Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
+      --download-hash                        Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
+      --filters-file string                  Read filtering patterns from a file
+      --force                                Bypass --max-delete safety check and run the sync. Consider using with --verbose
+  -h, --help                                 help for bisync
+      --ignore-listing-checksum              Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
+      --max-lock Duration                    Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
+      --no-cleanup                           Retain working files (useful for troubleshooting and testing).
+      --no-slow-hash                         Ignore listing checksums only on backends where they are slow
+      --recover                              Automatically recover from interruptions without requiring --resync.
+      --remove-empty-dirs                    Remove ALL empty directories at the final cleanup step.
+      --resilient                            Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
+  -1, --resync                               Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
+      --resync-mode string                   During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
+      --retries int                          Retry operations this many times if they fail (requires --resilient). (default 3)
+      --retries-sleep Duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
+      --slow-hash-sync-only                  Ignore slow checksums for listings and deltas, but still consider them during sync calls.
+      --workdir string                       Use custom working dir - useful for testing. (default: {WORKDIR})
+      --max-delete PERCENT                   Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%)
+  -n, --dry-run                              Go through the motions - No files are copied/deleted.
+  -v, --verbose                              Increases logging verbosity. May be specified more than once for more details.

Arbitrary rclone flags may be specified on the bisync command line, for example rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s. Note that the interactions of various rclone flags with the bisync process flow have not been fully tested yet.

    Paths

    Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path), Windows drive paths (with a drive letter and :) or configured remotes with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below).
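
For example, all of the following path styles may be mixed freely (the remote and server names are hypothetical):

rclone bisync /usr/local/src gdrive:src
rclone bisync C:\Users\MyLogin\Documents //server/share/Documents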

    @@ -11834,50 +12440,153 @@ Optional Flags:

    The listings in bisync working directory (default: ~/.cache/rclone/bisync) are named based on the Path1 and Path2 arguments so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst.

    Any empty directories after the sync on both the Path1 and Path2 filesystems are not deleted by default, unless --create-empty-src-dirs is specified. If the --remove-empty-dirs flag is specified, then both paths will have ALL empty directories purged as the last step in the process.

    Command-line flags

    -

    --resync

    -

    This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2.

    -

    The --resync sequence is roughly equivalent to:

    -
    rclone copy Path2 Path1 --ignore-existing
    -rclone copy Path1 Path2
    -

    Or, if using --create-empty-src-dirs:

    -
    rclone copy Path2 Path1 --ignore-existing
    -rclone copy Path1 Path2 --create-empty-src-dirs
    -rclone copy Path2 Path1 --create-empty-src-dirs
    +

    --resync

    +

    This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. By default, Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2.

    +

    The --resync sequence is roughly equivalent to the following (but see --resync-mode for other options):

    +
    rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
    +rclone copy Path1 Path2 [--create-empty-src-dirs]

The base directories on both Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - so that bisync can verify that both paths are valid.

    -

    When using --resync, a newer version of a file on the Path2 filesystem will be overwritten by the Path1 filesystem version. (Note that this is NOT entirely symmetrical.) Carefully evaluate deltas using --dry-run.

    +

    When using --resync, a newer version of a file on the Path2 filesystem will (by default) be overwritten by the Path1 filesystem version. (Note that this is NOT entirely symmetrical, and more symmetrical options can be specified with the --resync-mode flag.) Carefully evaluate deltas using --dry-run.

    For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.

For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst. This is a safety check that an unexpected empty path does not result in deleting everything in the other path.

    -

    --check-access

    +

    Note that --resync implies --resync-mode path1 unless a different --resync-mode is explicitly specified. It is not necessary to use both the --resync and --resync-mode flags -- either one is sufficient without the other.

    +

Note: --resync (including --resync-mode) should only be used under three specific (rare) circumstances:

1. It is your first bisync run (between these two paths)
2. You've just made changes to your bisync settings (such as editing the contents of your --filters-file)
3. There was an error on the prior run, and as a result, bisync now requires --resync to recover

    +

The rest of the time, you should omit --resync. The reason is that --resync will only copy (not sync) each side to the other. Therefore, if you included --resync for every bisync run, it would never be possible to delete a file -- the deleted file would always keep reappearing at the end of every run (because it's being copied from the other side where it still exists). Similarly, renaming a file would always result in a duplicate copy (both old and new name) on both sides.

    +

If you find that frequent interruptions from #3 are an issue, rather than automatically running --resync, the recommended alternative is to use the --resilient, --recover, and --conflict-resolve flags (along with Graceful Shutdown mode, when needed) for a very robust "set-it-and-forget-it" bisync setup that can automatically bounce back from almost any interruption it might encounter. Consider adding something like the following:

    +
    --resilient --recover --max-lock 2m --conflict-resolve newer
    +
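
For instance, a complete "set-it-and-forget-it" command built from those flags might look like this (remotes hypothetical):

rclone bisync remote1:path1 remote2:path2 --resilient --recover --max-lock 2m --conflict-resolve newer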

    --resync-mode CHOICE

    +

    In the event that a file differs on both sides during a --resync, --resync-mode controls which version will overwrite the other. The supported options are similar to --conflict-resolve. For all of the following options, the version that is kept is referred to as the "winner", and the version that is overwritten (deleted) is referred to as the "loser". The options are named after the "winner":

- path1 - (the default) - the version from Path1 is unconditionally considered the winner.
- path2 - same as path1, except the version from Path2 is considered the winner.
- newer - the file with the newer modtime is considered the winner, regardless of which side it came from.
- older - same as newer, except the file with the older modtime is considered the winner.
- larger - the file with the larger size is considered the winner.
- smaller - the file with the smaller size is considered the winner.

For all of the above options, note the following:

- If either of the underlying remotes lacks support for the chosen method, it will be ignored and will fall back to the default of path1. (For example, if --resync-mode newer is set, but one of the paths uses a remote that doesn't support modtime.)
- If a winner can't be determined because the chosen method's attribute is missing or equal, it will be ignored, and bisync will instead try to determine whether the files differ by looking at the other --compare methods in effect. (For example, if --resync-mode newer is set, but the Path1 and Path2 modtimes are identical, bisync will compare the sizes.) If bisync concludes that they differ, preference is given to whichever is the "source" at that moment. (In practice, this gives a slight advantage to Path2, as the 2to1 copy comes before the 1to2 copy.) If the files do not differ, nothing is copied (as both sides are already correct).
- These options apply only to files that exist on both sides (with the same name and relative path). Files that exist only on one side and not the other are always copied to the other during --resync (this is one of the main differences between resync and non-resync runs.)
- --conflict-resolve, --conflict-loser, and --conflict-suffix do not apply during --resync, and unlike these flags, nothing is renamed during --resync. When a file differs on both sides during --resync, one version always overwrites the other (much like in rclone copy.) (Consider using --backup-dir to retain a backup of the losing version.)
- Unlike for --conflict-resolve, --resync-mode none is not a valid option (or rather, it will be interpreted as "no resync", unless --resync has also been specified, in which case it will be ignored.)
- Winners and losers are decided at the individual file-level only (there is not currently an option to pick an entire winning directory atomically, although the path1 and path2 options typically produce a similar result.)
- To maintain backward-compatibility, the --resync flag implies --resync-mode path1 unless a different --resync-mode is explicitly specified. Similarly, all --resync-mode options (except none) imply --resync, so it is not necessary to use both the --resync and --resync-mode flags simultaneously -- either one is sufficient without the other.
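
For example, to prefer whichever version has the newer modtime during a resync (remotes hypothetical; --resync is implied):

rclone bisync remote1:path1 remote2:path2 --resync-mode newer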

    +

    --check-access

Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not generated automatically. For --check-access to succeed, you must first either:

A) Place one or more RCLONE_TEST files in both systems, or
B) Set --check-filename to a filename already in use in various locations throughout your sync'd fileset.

Recommended methods for A) include:

- rclone touch Path1/RCLONE_TEST (create a new file)
- rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST (copy an existing file)
- rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include "RCLONE_TEST" (copy multiple files at once, recursively)
- create the files manually (outside of rclone)
- run bisync once without --check-access to set matching files on both filesystems

The last method will also work, but is not preferred, due to the potential for user error (you are temporarily disabling the safety feature).

    Note that --check-access is still enforced on --resync, so bisync --resync --check-access will not work as a method of initially setting the files (this is to ensure that bisync can't inadvertently circumvent its own safety switch.)

Time stamps and file contents for RCLONE_TEST files are not important, just the names and locations. If you have symbolic links in your sync tree, it is recommended to place RCLONE_TEST files in the linked-to directory tree, to protect against bisync assuming a bunch of deleted files if the linked-to tree becomes inaccessible. See also the --check-filename flag.
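
For example, one way to bootstrap the check files and then enable the feature (remotes hypothetical):

rclone touch remote1:path1/RCLONE_TEST
rclone copyto remote1:path1/RCLONE_TEST remote2:path2/RCLONE_TEST
rclone bisync remote1:path1 remote2:path2 --check-access --resync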

    -

    --check-filename

    +

    --check-filename

    Name of the file(s) used in access health validation. The default --check-filename is RCLONE_TEST. One or more files having this filename must exist, synchronized between your source and destination filesets, in order for --check-access to succeed. See --check-access for additional details.
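
For example, if your fileset already contains a suitable file in various locations, say .keep (a hypothetical name), you could reuse it instead of creating RCLONE_TEST files:

rclone bisync remote1:path1 remote2:path2 --check-access --check-filename .keep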

    -

    --max-delete

    +

    --compare

    +

    As of v1.66, bisync fully supports comparing based on any combination of size, modtime, and checksum (lifting the prior restriction on backends without modtime support.)

    +

    By default (without the --compare flag), bisync inherits the same comparison options as sync (that is: size and modtime by default, unless modified with flags such as --checksum or --size-only.)

    +

    If the --compare flag is set, it will override these defaults. This can be useful if you wish to compare based on combinations not currently supported in sync, such as comparing all three of size AND modtime AND checksum simultaneously (or just modtime AND checksum).

    +

    --compare takes a comma-separated list, with the currently supported values being size, modtime, and checksum. For example, if you want to compare size and checksum, but not modtime, you would do:

    +
    --compare size,checksum
    +

    Or if you want to compare all three:

    +
    --compare size,modtime,checksum
    +

    --compare overrides any conflicting flags. For example, if you set the conflicting flags --compare checksum --size-only, --size-only will be ignored, and bisync will compare checksum and not size. To avoid confusion, it is recommended to use either --compare or the normal sync flags, but not both.

    +

    If --compare includes checksum and both remotes support checksums but have no hash types in common with each other, checksums will be considered only for comparisons within the same side (to determine what has changed since the prior sync), but not for comparisons against the opposite side. If one side supports checksums and the other does not, checksums will only be considered on the side that supports them.

    +

    When comparing with checksum and/or size without modtime, bisync cannot determine whether a file is newer or older -- only whether it is changed or unchanged. (If it is changed on both sides, bisync still does the standard equality-check to avoid declaring a sync conflict unless it absolutely has to.)

    +

    It is recommended to do a --resync when changing --compare settings, as otherwise your prior listing files may not contain the attributes you wish to compare (for example, they will not have stored checksums if you were not previously comparing checksums.)
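
For example, after adding checksum to your comparison settings, a resync might look like this (remotes hypothetical):

rclone bisync remote1:path1 remote2:path2 --compare size,modtime,checksum --resync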

    +

    --ignore-listing-checksum

    +

    When --checksum or --compare checksum is set, bisync will retrieve (or generate) checksums (for backends that support them) when creating the listings for both paths, and store the checksums in the listing files. --ignore-listing-checksum will disable this behavior, which may speed things up considerably, especially on backends (such as local) where hashes must be computed on the fly instead of retrieved. Please note the following:

    + +
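As a minimal sketch, a run with listing checksums disabled would look something like:

rclone bisync Path1 Path2 --ignore-listing-checksum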

    --no-slow-hash

    +

    On some remotes (notably local), checksums can dramatically slow down a bisync run, because hashes cannot be stored and need to be computed in real-time when they are requested. On other remotes (such as drive), they add practically no time at all. The --no-slow-hash flag will automatically skip checksums on remotes where they are slow, while still comparing them on others (assuming --compare includes checksum.) This can be useful when one of your bisync paths is slow but you still want to check checksums on the other, for a more robust sync.
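For example, to compare checksums everywhere except on remotes where hashing is slow, something like the following should work (assuming one local and one remote path):

rclone bisync /local/path remote:path --compare size,modtime,checksum --no-slow-hash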

    +

    --slow-hash-sync-only

    +

Same as --no-slow-hash, except slow hashes are still considered during sync calls. They are still NOT considered for determining deltas, nor are they included in listings. They are also skipped during --resync. The main use case for this flag is when you have a large number of files, but relatively few of them change from run to run -- so you don't want to check your entire tree every time (it would take too long), but you still want to consider checksums for the smaller group of files for which a modtime or size change was detected. Keep in mind that this speed savings comes with a safety trade-off: if a file's content were to change without a change to its modtime or size, bisync would not detect it, and it would not be synced.

    +

    --slow-hash-sync-only is only useful if both remotes share a common hash type (if they don't, bisync will automatically fall back to --no-slow-hash.) Both --no-slow-hash and --slow-hash-sync-only have no effect without --compare checksum (or --checksum).
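A sketch of the intended use, assuming the two sides share a common hash type:

rclone bisync /local/path remote:path --compare size,modtime,checksum --slow-hash-sync-only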

    +

    --download-hash

    +

    If --download-hash is set, bisync will use best efforts to obtain an MD5 checksum by downloading and computing on-the-fly, when checksums are not otherwise available (for example, a remote that doesn't support them.) Note that since rclone has to download the entire file, this may dramatically slow down your bisync runs, and is also likely to use a lot of data, so it is probably not practical for bisync paths with a large total file size. However, it can be a good option for syncing small-but-important files with maximum accuracy (for example, a source code repo on a crypt remote.) An additional advantage over methods like cryptcheck is that the original file is not required for comparison (for example, --download-hash can be used to bisync two different crypt remotes with different passwords.)

    +

    When --download-hash is set, bisync still looks for more efficient checksums first, and falls back to downloading only when none are found. It takes priority over conflicting flags such as --no-slow-hash. --download-hash is not suitable for Google Docs and other files of unknown size, as their checksums would change from run to run (due to small variances in the internals of the generated export file.) Therefore, bisync automatically skips --download-hash for files with a size less than 0.
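For example, to bisync two hypothetical crypt remotes (cryptremote1 and cryptremote2 are placeholder names) with maximum accuracy:

rclone bisync cryptremote1: cryptremote2: --compare size,checksum --download-hash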

    +

    See also: Hasher backend, cryptcheck command, rclone check --download option, md5sum command

    +

    --max-delete

    As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
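For example, to allow up to 75% deletion:

rclone bisync Path1 Path2 --max-delete 75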

    Also see the all files changed check.

    -

    --filters-file

    +

    --filters-file

By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for syncing with Dropbox.

    If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.

    To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
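In other words, after editing your filters file, a one-time run along these lines is needed before normal runs can resume:

rclone bisync Path1 Path2 --filters-file /path/to/filters.txt --resync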

    -

    --check-sync

    +

    --conflict-resolve CHOICE

    +

    In bisync, a "conflict" is a file that is new or changed on both sides (relative to the prior run) AND is not currently identical on both sides. --conflict-resolve controls how bisync handles such a scenario. The currently supported options are:

- none - (default) - do not attempt to pick a winner, keeping and renaming both files according to --conflict-loser and --conflict-suffix settings.
- path1 - the version from Path1 is unconditionally considered the winner (regardless of modtime and size, if any). This can be useful if one side is more trusted or up-to-date than the other.
- path2 - same as path1, except the Path2 version is considered the winner.
- newer - the newer file (by modtime) is considered the winner, regardless of which side it came from. This may result in having a mix of some winners from Path1, and some winners from Path2.
- older - same as newer, except the older file is considered the winner.
- larger - the larger file (by size) is considered the winner (regardless of modtime, if any).
- smaller - the smaller file (by size) is considered the winner (regardless of modtime, if any).

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it will be ignored and fall back to none. (For example, if --conflict-resolve newer is set, but one of the paths uses a remote that doesn't support modtime.)
- If a winner can't be determined because the chosen method's attribute is missing or equal, it will be ignored and fall back to none. (For example, if --conflict-resolve newer is set, but the Path1 and Path2 modtimes are identical, even if the sizes may differ.)
- If the file's content is currently identical on both sides, it is not considered a "conflict", even if new or changed on both sides since the prior sync. (For example, if you made a change on one side and then synced it to the other side by other means.) Therefore, none of the conflict resolution flags apply in this scenario.
- The conflict resolution flags do not apply during a --resync, as there is no "prior run" to speak of (but see --resync-mode for similar options.)
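For example, to let the most recently modified version win:

rclone bisync Path1 Path2 --conflict-resolve newer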

    +

    --conflict-loser CHOICE

    +

    --conflict-loser determines what happens to the "loser" of a sync conflict (when --conflict-resolve determines a winner) or to both files (when there is no winner.) The currently supported options are:

- num - (default) - auto-number the conflicts by automatically appending the next available number to the --conflict-suffix, in chronological order. For example, with the default settings, the first conflict for file.txt will be renamed file.txt.conflict1, and the next one file.txt.conflict2.
- pathname - rename the conflicts according to which side they came from. For example, with --conflict-suffix path, the Path1 version of file.txt would be renamed file.txt.path1, and the Path2 version file.txt.path2.
- delete - keep the winner only and delete the loser, instead of renaming it. If a winner cannot be determined, both files are kept and renamed (as with num).

    For all of the above options, note that if a winner cannot be determined (see --conflict-resolve for details on how this could happen), or if --conflict-resolve is not in use, both files will be renamed.
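For example, to keep only the newer version of a conflicted file and discard the older one:

rclone bisync Path1 Path2 --conflict-resolve newer --conflict-loser delete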

    +

    --conflict-suffix STRING[,STRING]

    +

    --conflict-suffix controls the suffix that is appended when bisync renames a --conflict-loser (default: conflict). --conflict-suffix will accept either one string or two comma-separated strings to assign different suffixes to Path1 vs. Path2. This may be helpful later in identifying the source of the conflict. (For example, --conflict-suffix dropboxconflict,laptopconflict)

    +

    With --conflict-loser num, a number is always appended to the suffix. With --conflict-loser pathname, a number is appended only when one suffix is specified (or when two identical suffixes are specified.) i.e. with --conflict-loser pathname, all of the following would produce exactly the same result:

    +
    --conflict-suffix path
    +--conflict-suffix path,path
    +--conflict-suffix path1,path2
    +

    Suffixes may be as short as 1 character. By default, the suffix is appended after any other extensions (ex. file.jpg.conflict1), however, this can be changed with the --suffix-keep-extension flag (i.e. to instead result in file.conflict1.jpg).

    +

    --conflict-suffix supports several dynamic date variables when enclosed in curly braces as globs. This can be helpful to track the date and/or time that each conflict was handled by bisync. For example:

    +
    --conflict-suffix {DateOnly}-conflict
    +// result: myfile.txt.2006-01-02-conflict1
    +

    All of the formats described here and here are supported, but take care to ensure that your chosen format does not use any characters that are illegal on your remotes (for example, macOS does not allow colons in filenames, and slashes are also best avoided as they are often interpreted as directory separators.) To address this particular issue, an additional {MacFriendlyTime} (or just {mac}) option is supported, which results in 2006-01-02 0304PM.

    +

Note that --conflict-suffix is entirely separate from rclone's main --suffix flag. This is intentional, as users may wish to use both flags simultaneously, if also using --backup-dir.

    +

    Finally, note that the default in bisync prior to v1.66 was to rename conflicts with ..path1 and ..path2 (with two periods, and path instead of conflict.) Bisync now defaults to a single dot instead of a double dot, but additional dots can be added by including them in the specified suffix string. For example, for behavior equivalent to the previous default, use:

    +
    [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
    +

    --check-sync

    Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings and in the untracked file content differences between the two paths. A resync run would correct the error.

    Note that the default-enabled integrity check locally executes a load of both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.

The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually syncing.
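For example, to run the integrity check by itself:

rclone bisync Path1 Path2 --check-sync=only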

    -

    See also: Concurrent modifications

    -

    --ignore-listing-checksum

    -

    By default, bisync will retrieve (or generate) checksums (for backends that support them) when creating the listings for both paths, and store the checksums in the listing files. --ignore-listing-checksum will disable this behavior, which may speed things up considerably, especially on backends (such as local) where hashes must be computed on the fly instead of retrieved. Please note the following:

    - -

    --resilient

    +

    Note that currently, --check-sync only checks listing snapshots and NOT the actual files on the remotes. Note also that the listing snapshots will not know about any changes that happened during or after the latest bisync run, as those will be discovered on the next run. Therefore, while listings should always match each other at the end of a bisync run, it is expected that they will not match the underlying remotes, nor will the remotes match each other, if there were changes during or after the run. This is normal, and any differences will be detected and synced on the next run.

    +

    For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using check (or cryptcheck, if at least one path is a crypt remote) instead of --check-sync, keeping in mind that differences are expected if files changed during or after your last bisync run.

    +

    For example, a possible sequence could look like this:

    +
1. Normally scheduled bisync run:

    rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient

2. Periodic independent integrity check (perhaps scheduled nightly or weekly):

    rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt

3. If diffs are found, you have some choices to correct them. If one side is more up-to-date and you want to make the other side match it, you could run:

    rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v

    (or switch Path1 and Path2 to make Path2 the source-of-truth)

    +

    Or, if neither side is totally up-to-date, you could run a --resync to bring them back into agreement (but remember that this could cause deleted files to re-appear.)

    +

Note also that rclone check does not currently include empty directories, so if you want to know if any empty directories are out of sync, consider alternatively running the above rclone sync command with --dry-run added.

    +

    See also: Concurrent modifications, --resilient

    +

    --resilient

    Caution: this is an experimental feature. Use at your own risk!

    By default, most errors or interruptions will cause bisync to abort and require --resync to recover. This is a safety feature, to prevent bisync from running again until a user checks things out. However, in some cases, bisync can go too far and enforce a lockout when one isn't actually necessary, like for certain less-serious errors that might resolve themselves on the next run. When --resilient is specified, bisync tries its best to recover and self-correct, and only requires --resync as a last resort when a human's involvement is absolutely necessary. The intended use case is for running bisync as a background process (such as via scheduled cron).

    When using --resilient mode, bisync will still report the error and abort, however it will not lock out future runs -- allowing the possibility of retrying at the next normally scheduled time, without requiring a --resync first. Examples of such retryable errors include access test failures, missing listing files, and filter change detections. These safety features will still prevent the current run from proceeding -- the difference is that if conditions have improved by the time of the next run, that next run will be allowed to proceed. Certain more serious errors will still enforce a --resync lockout, even in --resilient mode, to prevent data loss.

    -

    Behavior of --resilient may change in a future version.

    +

    Behavior of --resilient may change in a future version. (See also: --recover, --max-lock, Graceful Shutdown)

    +

    --recover

    +

    If --recover is set, in the event of a sudden interruption or other un-graceful shutdown, bisync will attempt to automatically recover on the next run, instead of requiring --resync. Bisync is able to recover robustly by keeping one "backup" listing at all times, representing the state of both paths after the last known successful sync. Bisync can then compare the current state with this snapshot to determine which changes it needs to retry. Changes that were synced after this snapshot (during the run that was later interrupted) will appear to bisync as if they are "new or changed on both sides", but in most cases this is not a problem, as bisync will simply do its usual "equality check" and learn that no action needs to be taken on these files, since they are already identical on both sides.

    +

    In the rare event that a file is synced successfully during a run that later aborts, and then that same file changes AGAIN before the next run, bisync will think it is a sync conflict, and handle it accordingly. (From bisync's perspective, the file has changed on both sides since the last trusted sync, and the files on either side are not currently identical.) Therefore, --recover carries with it a slightly increased chance of having conflicts -- though in practice this is pretty rare, as the conditions required to cause it are quite specific. This risk can be reduced by using bisync's "Graceful Shutdown" mode (triggered by sending SIGINT or Ctrl+C), when you have the choice, instead of forcing a sudden termination.

    +

    --recover and --resilient are similar, but distinct -- the main difference is that --resilient is about retrying, while --recover is about recovering. Most users will probably want both. --resilient allows retrying when bisync has chosen to abort itself due to safety features such as failing --check-access or detecting a filter change. --resilient does not cover external interruptions such as a user shutting down their computer in the middle of a sync -- that is what --recover is for.
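A sketch combining the two, per the recommendation above:

rclone bisync Path1 Path2 --resilient --recover -v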

    +

    --max-lock

    +

    Bisync uses lock files as a safety feature to prevent interference from other bisync runs while it is running. Bisync normally removes these lock files at the end of a run, but if bisync is abruptly interrupted, these files will be left behind. By default, they will lock out all future runs, until the user has a chance to manually check things out and remove the lock. As an alternative, --max-lock can be used to make them automatically expire after a certain period of time, so that future runs are not locked out forever, and auto-recovery is possible. --max-lock can be any duration 2m or greater (or 0 to disable). If set, lock files older than this will be considered "expired", and future runs will be allowed to disregard them and proceed. (Note that the --max-lock duration must be set by the process that left the lock file -- not the later one interpreting it.)

    +

If set, bisync will also "renew" these lock files every --max-lock minus one minute throughout a run, for extra safety. (For example, with --max-lock 5m, bisync would renew the lock file (for another 5 minutes) every 4 minutes until the run has completed.) In other words, it should not be possible for a lock file to pass its expiration time while the process that created it is still running -- and you can therefore be reasonably sure that any expired lock file you may find was left there by an interrupted run, not one that is still running and just taking a while.

    +

    If --max-lock is 0 or not set, the default is that lock files will never expire, and will block future runs (of these same two bisync paths) indefinitely.

    +

    For maximum resilience from disruptions, consider setting a relatively short duration like --max-lock 2m along with --resilient and --recover, and a relatively frequent cron schedule. The result will be a very robust "set-it-and-forget-it" bisync run that can automatically bounce back from almost any interruption it might encounter, without requiring the user to get involved and run a --resync. (See also: Graceful Shutdown mode)
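As an illustrative sketch of such a "set-it-and-forget-it" setup, a crontab entry running every five minutes might look like this (paths and schedule are placeholders):

*/5 * * * * rclone bisync Path1 Path2 --resilient --recover --max-lock 2m -v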

    +

    --backup-dir1 and --backup-dir2

    +

    As of v1.66, --backup-dir is supported in bisync. Because --backup-dir must be a non-overlapping path on the same remote, Bisync has introduced new --backup-dir1 and --backup-dir2 flags to support separate backup-dirs for Path1 and Path2 (bisyncing between different remotes with --backup-dir would not otherwise be possible.) --backup-dir1 and --backup-dir2 can use different remotes from each other, but --backup-dir1 must use the same remote as Path1, and --backup-dir2 must use the same remote as Path2. Each backup directory must not overlap its respective bisync Path without being excluded by a filter rule.

    +

The standard --backup-dir will also work, if both paths use the same remote (but note that deleted files from both paths would be mixed together in the same dir). If either --backup-dir1 or --backup-dir2 is set, it will override --backup-dir.

    +

    Example:

    +
    rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
    +

    In this example, if the user deletes a file in /Users/someuser/some/local/path/Bisync, bisync will propagate the delete to the other side by moving the corresponding file from gdrive:Bisync to gdrive:BackupDir. If the user deletes a file from gdrive:Bisync, bisync moves it from /Users/someuser/some/local/path/Bisync to /Users/someuser/some/local/path/BackupDir.

    +

    In the event of a rename due to a sync conflict, the rename is not considered a delete, unless a previous conflict with the same name already exists and would get overwritten.

    +

    See also: --suffix, --suffix-keep-extension

    Operation

    Runtime flow details

    bisync retains the listings of the Path1 and Path2 filesystems from the prior run. On each successive run it will:

    @@ -11888,7 +12597,7 @@ rclone copy Path2 Path1 --create-empty-src-dirs

    Safety measures

    -

    Amazon Drive

    -

    Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

    -

    Status

    -

    Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

    -

    For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.

    -

    If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

    -

    Configuration

    -

    The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

    -

    The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.

    -

Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third-party oauth proxy, in which case you will need to enter client_id, client_secret, auth_url and token_url.

    -

    Note also if you are not using Amazon's auth_url and token_url, (ie you filled in something for those) then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize will not work.

    -

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    -n) New remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -n/r/c/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Amazon Drive
    -   \ "amazon cloud drive"
    -[snip]
    -Storage> amazon cloud drive
    -Amazon Application Client Id - required.
    -client_id> your client ID goes here
    -Amazon Application Client Secret - required.
    -client_secret> your client secret goes here
    -Auth server URL - leave blank to use Amazon's.
    -auth_url> Optional auth URL
    -Token server url - leave blank to use Amazon's.
    -token_url> Optional token URL
    -Remote config
    -Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
    -Use web browser to automatically authenticate rclone with remote?
    - * Say Y if the machine running rclone has a web browser you can use
    - * Say N if running rclone on a (remote) machine without web browser access
    -If not sure try Y. If Y failed, try N.
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -client_id = your client ID goes here
    -client_secret = your client secret goes here
    -auth_url = Optional auth URL
    -token_url = Optional token URL
    -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    -

Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    -

    List directories in top level of your Amazon Drive

    -
    rclone lsd remote:
    -

    List all the files in your Amazon Drive

    -
    rclone ls remote:
    -

    To copy a local directory to an Amazon Drive directory called backup

    -
    rclone copy /home/source remote:backup
    -

    Modification times and hashes

    -

    Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.

    -

    It does support the MD5 hash algorithm, so for a more accurate sync, you can use the --checksum flag.

    -

    Restricted filename characters

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Deleting files

    -

    Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

    -

    Using with non .com Amazon accounts

    -

    Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

    -

    Standard options

    -

    Here are the Standard options specific to amazon cloud drive (Amazon Drive).

    -

    --acd-client-id

    -

    OAuth Client Id.

    -

    Leave blank normally.

    -

    Properties:

    - -

    --acd-client-secret

    -

    OAuth Client Secret.

    -

    Leave blank normally.

    -

    Properties:

    -

    Advanced options

    -

    Here are the Advanced options specific to amazon cloud drive (Amazon Drive).

    -

    --acd-token

    -

    OAuth Access Token as a JSON blob.

    +

    Here are the Advanced options specific to alias (Alias for an existing remote).

    +

    --alias-description

    +

    Description of the remote

    Properties:

    -

    --acd-auth-url

    -

    Auth server URL.

    -

    Leave blank to use the provider defaults.

    -

    Properties:

    - -

    --acd-token-url

    -

    Token server url.

    -

    Leave blank to use the provider defaults.

    -

    Properties:

    - -

    --acd-checkpoint

    -

    Checkpoint for internal polling (debug).

    -

    Properties:

    - -

    --acd-upload-wait-per-gb

    -

    Additional time per GiB to wait after a failed complete upload to see if it appears.

    -

    Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1 GiB in size and nearly every time for files bigger than 10 GiB. This parameter controls the time rclone waits for the file to appear.

    -

    The default value for this parameter is 3 minutes per GiB, so by default it will wait 3 minutes for every GiB uploaded to see if the file appears.

    -

    You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

    -

    These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

    -

    Upload with the "-v" flag to see more info about what rclone is doing in this situation.

    -

    Properties:

    - - -

--acd-templink-threshold

Files >= this size will be downloaded via their tempLink.

    -

    Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10 GiB. The default for this is 9 GiB which shouldn't need to be changed.

    -

    To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

    -

    Properties:

    - -

    --acd-encoding

    -

    The encoding for the backend.

    -

    See the encoding section in the overview for more info.

    -

    Properties:

    - -

    Limitations

    -

    Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    -

    Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.

    -

    Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.

    -

At the time of writing (Jan 2016) this limit is in the area of 50 GiB per file. This means that larger files are likely to fail.

    -

    Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.

    -

    rclone about is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    -

    See List of backends that do not support rclone about and rclone about

    Amazon S3 Storage Providers

    The S3 backend can be used with a number of different providers:

    When using the lsd subcommand, the ListAllMyBuckets permission is required.

    Example policy:

    @@ -13468,6 +14004,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
3. When using s3-no-check-bucket and the bucket already exists, the "arn:aws:s3:::BUCKET_NAME" doesn't have to be included.

    For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

    Key Management System (KMS)

    @@ -13483,7 +14020,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test

    If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

    As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail. A simple solution is to set the --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.
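For example (bucket and path are placeholders):

rclone copy --s3-upload-cutoff 0 /path/to/files s3:bucket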

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).

    --s3-provider

    Choose your S3 provider.

    @@ -14315,8 +14852,8 @@ Windows: "%USERPROFILE%\.aws\credentials"
  • Required: false
--s3-upload-concurrency

    -

    Concurrency for multipart uploads.

    -

    This is the number of chunks of the same file that are uploaded concurrently.

    +

    Concurrency for multipart uploads and copies.

    +

    This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.

    If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

    Properties:

    +

    --s3-use-dual-stack

    +

    If true use AWS S3 dual-stack endpoint (IPv6 support).

    +

    See AWS Docs on Dualstack Endpoints

    +

    Properties:

    +

    --s3-use-accelerate-endpoint

    If true use the AWS S3 accelerated endpoint.

    See: AWS S3 Transfer acceleration

    @@ -14546,6 +15093,18 @@ Windows: "%USERPROFILE%\.aws\credentials"
  • Type: Time
  • Default: off

    --s3-version-deleted

    +

    Show deleted file markers when using versions.

    +

    This shows deleted file markers in the listing when using versions. These will appear as 0 size files. The only operation which can be performed on them is deletion.

    +

    Deleting a delete marker will reveal the previous version.

    +

    Deleted files will always show with a timestamp.
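For example, following the style of the version listings above:

rclone -q --s3-versions --s3-version-deleted ls s3:cleanup-test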

    +

    Properties:

    +

    --s3-decompress

    If set this will decompress gzip encoded objects.

    It is possible to upload objects to S3 with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.

    @@ -14631,6 +15190,15 @@ Windows: "%USERPROFILE%\.aws\credentials"
  • Type: Tristate
  • Default: unset

    --s3-description

    +

    Description of the remote

    +

    Properties:

    +

    Metadata

    User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.

    Here are the possible system metadata items for the s3 backend.

@@ -15067,10 +15635,10 @@ Option Storage.
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 [snip]
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
    \ (s3)
 [snip]
-Storage> 5
+Storage> s3
 Option provider.
 Choose your S3 provider.
 Choose a number from below, or type in your own value.
@@ -15185,18 +15753,11 @@ e/n/d/r/c/s/q> q
  • Select "s3" storage.
  • Choose a number from below, or type in your own value
    -    1 / Alias for an existing remote
    -    \ "alias"
    -    2 / Amazon Drive
    -    \ "amazon cloud drive"
    -    3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS)
    -    \ "s3"
    -    4 / Backblaze B2
    -    \ "b2"
     [snip]
    -    23 / HTTP
    -    \ "http"
    -Storage> 3
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
+   \ "s3"
+[snip]
+Storage> s3
    1. Select IBM COS as the S3 Storage Provider.
@@ -15339,7 +15900,7 @@ Option Storage.
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 [snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
    \ (s3)
 [snip]
 Storage> s3
@@ -15434,7 +15995,7 @@ name> ionos-fra
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 [snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
    \ (s3)
 [snip]
 Storage> s3
@@ -15616,15 +16177,8 @@ n/s/q> n
  • Select s3 storage.
  • Choose a number from below, or type in your own value
    - 1 / 1Fichier
    -   \ (fichier)
    - 2 / Akamai NetStorage
    -   \ (netstorage)
    - 3 / Alias for an existing remote
    -   \ (alias)
    - 4 / Amazon Drive
    -   \ (amazon cloud drive)
    - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ (s3)
     [snip]
     Storage> s3
    @@ -15837,7 +16391,7 @@ name> remote
    Type of storage to configure.
     Choose a number from below, or type in your own value.
     [snip]
    -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ (s3)
     [snip]
     Storage> s3
@@ -16064,7 +16618,7 @@ Type of storage to configure.
 Enter a string value. Press Enter for the default ("").
 Choose a number from below, or type in your own value
 [snip]
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
    \ "s3"
 [snip]
 Storage> s3
@@ -16166,7 +16720,7 @@ Option Storage.
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 ...
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
    \ (s3)
 ...
 Storage> s3
@@ -16414,15 +16968,8 @@ n/s/q> n
  • Select s3 storage.
  • Choose a number from below, or type in your own value
    - 1 / 1Fichier
    -   \ (fichier)
    - 2 / Akamai NetStorage
    -   \ (netstorage)
    - 3 / Alias for an existing remote
    -   \ (alias)
    - 4 / Amazon Drive
    -   \ (amazon cloud drive)
    - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ (s3)
     [snip]
     Storage> s3
@@ -16609,7 +17156,7 @@ Option Storage.
 Type of storage to configure.
 Choose a number from below, or type in your own value.
 [snip]
- X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
    \ (s3)
 [snip]
 Storage> s3
@@ -16828,13 +17375,8 @@ n/s/q> n
  • Select s3 storage.
  • Choose a number from below, or type in your own value
    -1 / 1Fichier
    -   \ "fichier"
    - 2 / Alias for an existing remote
    -   \ "alias"
    - 3 / Amazon Drive
    -   \ "amazon cloud drive"
    - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ "s3"
     [snip]
     Storage> s3
    @@ -16928,7 +17470,7 @@ cos s3

    For Netease NOS configure as per the configurator rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.

    Petabox

    Here is an example of making a Petabox configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    No remotes found, make a new one?
     n) New remote
    @@ -17162,7 +17704,7 @@ Type of storage to configure.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
     
    - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ "s3"
     
     Storage> s3
    @@ -17729,9 +18271,12 @@ Properties:
     
     #### --b2-download-auth-duration
     
    -Time before the authorization token will expire in s or suffix ms|s|m|h|d.
    +Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
    +
    +This is used in combination with "rclone link" for making files
    +accessible to the public and sets the duration before the download
    +authorization token will expire.
     
    -The duration before the download authorization token will expire.
     The minimum value is 1 second. The maximum value is one week.
     
     Properties:
    @@ -17807,6 +18352,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
     
    +#### --b2-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_B2_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ## Backend commands
     
     Here are the commands specific to the b2 backend.
    @@ -18266,6 +18822,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
     
    +#### --box-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_BOX_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -18898,6 +19465,17 @@ Properties:
     - Type:        Duration
     - Default:     1s
     
    +#### --cache-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_CACHE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ## Backend commands
     
     Here are the commands specific to the cache backend.
    @@ -19338,6 +19916,17 @@ Properties:
             - If meta format is set to "none", rename transactions will always be used.
             - This method is EXPERIMENTAL, don't use on production systems.
     
    +#### --chunker-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_CHUNKER_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     #  Citrix ShareFile
    @@ -19583,6 +20172,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
     
    +#### --sharefile-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_SHAREFILE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     ## Limitations
     
    @@ -20053,6 +20653,22 @@ Properties:
     - Type:        bool
     - Default:     false
     
    +#### --crypt-strict-names
    +
    +If set, this will raise an error when crypt comes across a filename that can't be decrypted.
    +
    +(By default, rclone will just log a NOTICE and continue as normal.)
    +This can happen if encrypted and unencrypted files are stored in the same
    +directory (which is not recommended.) It may also indicate a more serious
    +problem that should be investigated.
    +
    +Properties:
    +
    +- Config:      strict_names
    +- Env Var:     RCLONE_CRYPT_STRICT_NAMES
    +- Type:        bool
    +- Default:     false
    +
     #### --crypt-filename-encoding
     
     How to encode the encrypted filename to text string.
    @@ -20090,6 +20706,17 @@ Properties:
     - Type:        string
     - Default:     ".bin"
     
    +#### --crypt-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_CRYPT_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Any metadata supported by the underlying remote is read and written.
    @@ -20258,7 +20885,7 @@ encoding is modified in two ways:
       * we strip the padding character `=`
     
     `base32` is used rather than the more efficient `base64` so rclone can be
    -used on case insensitive remotes (e.g. Windows, Amazon Drive).
    +used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc).
     
     ### Key derivation
     
    @@ -20391,6 +21018,17 @@ Properties:
     - Type:        SizeSuffix
     - Default:     20Mi
     
    +#### --compress-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_COMPRESS_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Any metadata supported by the underlying remote is read and written.
    @@ -20496,6 +21134,21 @@ Properties:
     - Type:        SpaceSepList
     - Default:     
     
    +### Advanced options
    +
    +Here are the Advanced options specific to combine (Combine several remotes into one).
    +
    +#### --combine-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_COMBINE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Any metadata supported by the underlying remote is read and written.
    @@ -20929,6 +21582,17 @@ Properties:
     - Type:        Duration
     - Default:     10m0s
     
    +#### --dropbox-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_DROPBOX_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -21189,6 +21853,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,Del,Ctl,InvalidUtf8,Dot
     
    +#### --filefabric-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_FILEFABRIC_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     #  FTP
    @@ -21585,6 +22260,17 @@ Properties:
         - "Ctl,LeftPeriod,Slash"
             - VsFTPd can't handle file names starting with dot
     
    +#### --ftp-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_FTP_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -22227,6 +22913,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,CrLf,InvalidUtf8,Dot
     
    +#### --gcs-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_GCS_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -23507,10 +24204,23 @@ Properties:
         - "true"
             - Get GCP IAM credentials from the environment (env vars or IAM).
     
    +#### --drive-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_DRIVE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     User metadata is stored in the properties field of the drive object.
     
    +Metadata is supported on files and directories.
    +
     Here are the possible system metadata items for the drive backend.
     
     | Name | Help | Type | Example | Read Only |
    @@ -24247,6 +24957,18 @@ This will guide you through an interactive setup process:
     - Config: batch_commit_timeout - Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT - Type: Duration - Default: 10m0s
     
     
    +#### --gphotos-description
    +
    +
    +Description of the remote
    +
    +
    +Properties:
    +
    +
+- Config:      description
+- Env Var:     RCLONE_GPHOTOS_DESCRIPTION
+- Type:        string
+- Required:    false
    +
    +
     ## Limitations
     
     
    @@ -24489,6 +25211,17 @@ Properties:
     - Type:        SizeSuffix
     - Default:     0
     
    +#### --hasher-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_HASHER_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Any metadata supported by the underlying remote is read and written.
    @@ -24780,6 +25513,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,Colon,Del,Ctl,InvalidUtf8,Dot
     
    +#### --hdfs-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_HDFS_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -25160,6 +25904,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,Dot
     
    +#### --hidrive-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_HIDRIVE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -25363,6 +26118,17 @@ Properties:
     - Type:        bool
     - Default:     false
     
    +#### --http-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_HTTP_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ## Backend commands
     
     Here are the commands specific to the http backend.
    @@ -25550,6 +26316,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket
     
    +#### --imagekit-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_IMAGEKIT_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Any metadata supported by the underlying remote is read and written.
    @@ -25768,6 +26545,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
     
    +#### --internetarchive-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_INTERNETARCHIVE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Metadata fields provided by Internet Archive.
    @@ -26157,6 +26945,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
     
    +#### --jottacloud-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_JOTTACLOUD_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ### Metadata
     
     Jottacloud has limited support for metadata, currently an extended set of timestamps.
    @@ -26344,6 +27143,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
     
    +#### --koofr-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_KOOFR_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -26418,6 +27228,21 @@ Properties:
     - Type:        string
     - Required:    true
     
    +### Advanced options
    +
    +Here are the Advanced options specific to linkbox (Linkbox).
    +
    +#### --linkbox-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_LINKBOX_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -26776,6 +27601,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
     
    +#### --mailru-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_MAILRU_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ## Limitations
    @@ -27019,6 +27855,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,InvalidUtf8,Dot
     
    +#### --mega-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_MEGA_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ### Process `killed`
    @@ -27083,6 +27930,21 @@ The memory backend replaces the [default restricted characters
     set](https://rclone.org/overview/#restricted-characters).
     
     
    +### Advanced options
    +
    +Here are the Advanced options specific to memory (In memory object storage system.).
    +
    +#### --memory-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_MEMORY_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     #  Akamai NetStorage
    @@ -27280,6 +28142,17 @@ Properties:
         - "https"
             - HTTPS protocol
     
    +#### --netstorage-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_NETSTORAGE_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     ## Backend commands
     
     Here are the commands specific to the netstorage backend.
    @@ -28114,6 +28987,35 @@ Properties:
     - Type:        bool
     - Default:     false
     
    +#### --azureblob-delete-snapshots
    +
    +Set to specify how to deal with snapshots on blob deletion.
    +
    +Properties:
    +
    +- Config:      delete_snapshots
    +- Env Var:     RCLONE_AZUREBLOB_DELETE_SNAPSHOTS
    +- Type:        string
    +- Required:    false
    +- Choices:
    +    - ""
    +        - By default, the delete operation fails if a blob has snapshots
    +    - "include"
    +        - Specify 'include' to remove the root blob and all its snapshots
    +    - "only"
    +        - Specify 'only' to remove only the snapshots but keep the root blob.
    +
    +#### --azureblob-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_AZUREBLOB_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ### Custom upload headers
    @@ -28783,6 +29685,17 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot
     
    +#### --azurefiles-description
    +
    +Description of the remote
    +
    +Properties:
    +
    +- Config:      description
    +- Env Var:     RCLONE_AZUREFILES_DESCRIPTION
    +- Type:        string
    +- Required:    false
    +
     
     
     ### Custom upload headers
    @@ -29372,7 +30285,7 @@ Properties:
     
     If set rclone will use delta listing to implement recursive listings.
     
    -If this flag is set the the onedrive backend will advertise `ListR`
    +If this flag is set the onedrive backend will advertise `ListR`
     support for recursive listings.
     
     Setting this flag speeds up these things greatly:
    @@ -29405,6 +30318,30 @@ Properties:
     - Type:        bool
     - Default:     false
     
    +#### --onedrive-metadata-permissions
    +
    +Control whether permissions should be read or written in metadata.
    +
    +Reading permissions metadata from files can be done quickly, but it
    +isn't always desirable to set the permissions from the metadata.
    +
    +
    +Properties:
    +
    +- Config:      metadata_permissions
    +- Env Var:     RCLONE_ONEDRIVE_METADATA_PERMISSIONS
    +- Type:        Bits
    +- Default:     off
    +- Examples:
    +    - "off"
    +        - Do not read or write the value
    +    - "read"
    +        - Read the value only
    +    - "write"
    +        - Write the value only
    +    - "read,write"
    +        - Read and Write the value.
    +
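    +For example, to copy with metadata while reading and writing
    +permissions (a hedged example; the paths and remote name are
    +placeholders):
    +
    +    rclone copy -M --onedrive-metadata-permissions read,write /path/to/src remote:dst
    +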
     #### --onedrive-encoding
     
     The encoding for the backend.
    @@ -29418,4068 +30355,3871 @@ Properties:
     - Type:        Encoding
     - Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
     
    +#### --onedrive-description
     
    -
    -## Limitations
    -
    -If you don't use rclone for 90 days the refresh token will
    -expire. This will result in authorization problems. This is easy to
    -fix by running the `rclone config reconnect remote:` command to get a
    -new token and refresh token.
    -
    -### Naming
    -
    -Note that OneDrive is case insensitive so you can't have a
    -file called "Hello.doc" and one called "hello.doc".
    -
    -There are quite a few characters that can't be in OneDrive file
    -names.  These can't occur on Windows platforms, but on non-Windows
    -platforms they are common.  Rclone will map these names to and from an
    -identical looking unicode equivalent.  For example if a file has a `?`
    -in it will be mapped to `？` instead.
    -
    -### File sizes
    -
    -The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
    -
    -### Path length
    -
    -The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
    -
    -### Number of files
    -
    -OneDrive seems to be OK with at least 50,000 files in a folder, but at
    -100,000 rclone will get errors listing the directory like `couldn’t
    -list files: UnknownError:`.  See
    -[#2707](https://github.com/rclone/rclone/issues/2707) for more info.
    -
    -An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
    -
    -## Versions
    -
    -Every change in a file OneDrive causes the service to create a new
    -version of the file.  This counts against a users quota.  For
    -example changing the modification time of a file creates a second
    -version, so the file apparently uses twice the space.
    -
    -For example the `copy` command is affected by this as rclone copies
    -the file and then afterwards sets the modification time to match the
    -source file which uses another version.
    -
    -You can use the `rclone cleanup` command (see below) to remove all old
    -versions.
    -
    -Or you can set the `no_versions` parameter to `true` and rclone will
    -remove versions after operations which create new versions. This takes
    -extra transactions so only enable it if you need it.
    -
    -**Note** At the time of writing Onedrive Personal creates versions
    -(but not for setting the modification time) but the API for removing
    -them returns "API not found" so cleanup and `no_versions` should not
    -be used on Onedrive Personal.
    -
    -### Disabling versioning
    -
    -Starting October 2018, users will no longer be able to
    -disable versioning by default. This is because Microsoft has brought
    -an
    -[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390)
    -to the mechanism. To change this new default setting, a PowerShell
    -command is required to be run by a SharePoint admin. If you are an
    -admin, you can run these commands in PowerShell to change that
    -setting:
    -
    -1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already)
    -2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
    -3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
    -4. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
    -5. `Disconnect-SPOService` (to disconnect from the server)
    -
    -*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.*
    -
    -User [Weropol](https://github.com/Weropol) has found a method to disable
    -versioning on OneDrive
    -
    -1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
    -2. Click Site settings.
    -3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
    -4. Click Customize "Documents".
    -5. Click General Settings > Versioning Settings.
    -6. Under Document Version History select the option No versioning.
    -Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
    -7. Apply the changes by clicking OK.
    -8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
    -9. Restore the versioning settings after using rclone. (Optional)
    -
    -## Cleanup
    -
    -OneDrive supports `rclone cleanup` which causes rclone to look through
    -every file under the path supplied and delete all version but the
    -current version. Because this involves traversing all the files, then
    -querying each file for versions it can be quite slow. Rclone does
    -`--checkers` tests in parallel. The command also supports `--interactive`/`i`
    -or `--dry-run` which is a great way to see what it would do.
    -
    -    rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
    -    rclone cleanup remote:path/subdir               # unconditionally remove all old version for path/subdir
    -
    -**NB** Onedrive personal can't currently delete versions
    -
    -## Troubleshooting ##
    -
    -### Excessive throttling or blocked on SharePoint
    -
    -If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`
    -
    -The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
    -
    -### Unexpected file size/hash differences on Sharepoint ####
    -
    -It is a
    -[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
    -issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies
    -uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and
    -hash checks to fail. There are also other situations that will cause OneDrive to
    -report inconsistent file sizes. To use rclone with such
    -affected files on Sharepoint, you
    -may disable these checks with the following command line arguments:
    -
    -```
    ---ignore-checksum --ignore-size
    -```
    -
    -Alternatively, if you have write access to the OneDrive files, it may be possible
    -to fix this problem for certain files, by attempting the steps below.
    -Open the web interface for [OneDrive](https://onedrive.live.com) and find the
    -affected files (which will be in the error messages/log for rclone). Simply click on
    -each of these files, causing OneDrive to open them on the web. This will cause each
    -file to be converted in place to a format that is functionally equivalent
    -but which will no longer trigger the size discrepancy. Once all problematic files
    -are converted you will no longer need the ignore options above.
    -
    -### Replacing/deleting existing files on Sharepoint gets "item not found" ####
    -
    -It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
    -that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
    -found" errors when users try to replace or delete uploaded files; this seems to
    -mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use
    -the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
    -files to be replaced/deleted into a given backup directory (instead of directly
    -replacing/deleting them). For example, to instruct rclone to move the files into
    -the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:
    -
    -```
    ---backup-dir mysharepoint:rclone-backup-dir
    -```
    -
    -### access\_denied (AADSTS65005) ####
    -
    -```
    -Error: access_denied
    -Code: AADSTS65005
    -Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
    -```
    -
    -This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
    -
    -However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
    -
    -### invalid\_grant (AADSTS50076) ####
    -
    -```
    -Error: invalid_grant
    -Code: AADSTS50076
    -Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
    -```
    -
    -If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
    -
    -### Invalid request when making public links ####
    -
    -On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
    -request" error. A possible cause is that the organisation admin didn't allow
    -public links to be made for the organisation/sharepoint library. To fix the
    -permissions as an admin, take a look at the docs:
    -[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
    -[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
    -
    -### Can not access `Shared` with me files
    -
    -Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:
    -
    -1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
    -2. Right click a item in `Shared`, then click `Add shortcut to My files` in the context
    -    ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)")
    -3. The shortcut will appear in `My files`, you can access it with rclone, it behaves like a normal folder/file.
    -    ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)")
    -    ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)")
    -
    -### Live Photos uploaded from iOS (small video clips in .heic files)
    -
    -The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) 
    -of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. 
    -The usage and download of these uploaded Live Photos is unfortunately still work-in-progress 
    -and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
    -
    -The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. 
    -Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. 
    -The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
    -
    -The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this:
    -
    -    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
    -    DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
    -    INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)
    -
    -These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still-picture not the movie clip, 
    -and relies on modification dates being correctly updated on all files in all situations.
    -
    -The different sizes will also cause `rclone check` to report size errors something like this:
    -
    -    ERROR : 20230203_123826234_iOS.heic: sizes differ
    -
    -These check errors can be suppressed by adding `--ignore-size`.
    -
    -The different sizes will also cause `rclone mount` to fail downloading with an error something like this:
    -
    -    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
    -
    -or like this when using `--cache-mode=full`:
    -
    -    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
    -    ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
    -
    -#  OpenDrive
    -
    -Paths are specified as `remote:path`
    -
    -Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    -
    -## Configuration
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -
    -```
    -n) New remote
    -d) Delete remote
    -q) Quit config
    -e/n/d/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / OpenDrive
    -   \ "opendrive"
    -[snip]
    -Storage> opendrive
    -Username
    -username>
    -Password
    -y) Yes type in my own password
    -g) Generate random password
    -y/g> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    ---------------------
    -[remote]
    -username =
    -password = *** ENCRYPTED ***
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -List directories in top level of your OpenDrive
    -
    -    rclone lsd remote:
    -
    -List all the files in your OpenDrive
    -
    -    rclone ls remote:
    -
    -To copy a local directory to an OpenDrive directory called backup
    -
    -    rclone copy /home/source remote:backup
    -
    -### Modification times and hashes
    -
    -OpenDrive allows modification times to be set on objects accurate to 1
    -second. These will be used to detect whether objects need syncing or
    -not.
    -
    -The MD5 hash algorithm is supported.
    -
    -### Restricted filename characters
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| NUL       | 0x00  | ␀           |
    -| /         | 0x2F  | ／          |
    -| "         | 0x22  | ＂          |
    -| *         | 0x2A  | ＊          |
    -| :         | 0x3A  | ：          |
    -| <         | 0x3C  | ＜          |
    -| >         | 0x3E  | ＞          |
    -| ?         | 0x3F  | ？          |
    -| \         | 0x5C  | ＼          |
    -| \|        | 0x7C  | ｜          |
    -
    -File names can also not begin or end with the following characters.
    -These only get replaced if they are the first or last character in the name:
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| SP        | 0x20  | ␠           |
    -| HT        | 0x09  | ␉           |
    -| LF        | 0x0A  | ␊           |
    -| VT        | 0x0B  | ␋           |
    -| CR        | 0x0D  | ␍           |
    -
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to opendrive (OpenDrive).
    -
    -#### --opendrive-username
    -
    -Username.
    +Description of the remote
     
     Properties:
     
    -- Config:      username
    -- Env Var:     RCLONE_OPENDRIVE_USERNAME
    -- Type:        string
    -- Required:    true
    -
    -#### --opendrive-password
    -
    -Password.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      password
    -- Env Var:     RCLONE_OPENDRIVE_PASSWORD
    -- Type:        string
    -- Required:    true
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to opendrive (OpenDrive).
    -
    -#### --opendrive-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_OPENDRIVE_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
    -
    -#### --opendrive-chunk-size
    -
    -Files will be uploaded in chunks this size.
    -
    -Note that these chunks are buffered in memory so increasing them will
    -increase memory use.
    -
    -Properties:
    -
    -- Config:      chunk_size
    -- Env Var:     RCLONE_OPENDRIVE_CHUNK_SIZE
    -- Type:        SizeSuffix
    -- Default:     10Mi
    -
    -
    -
    -## Limitations
    -
    -Note that OpenDrive is case insensitive so you can't have a
    -file called "Hello.doc" and one called "hello.doc".
    -
    -There are quite a few characters that can't be in OpenDrive file
    -names.  These can't occur on Windows platforms, but on non-Windows
    -platforms they are common.  Rclone will map these names to and from an
    -identical looking unicode equivalent.  For example if a file has a `?`
    -in it will be mapped to `？` instead.
    -
    -`rclone about` is not supported by the OpenDrive backend. Backends without
    -this capability cannot determine free space for an rclone mount or
    -use policy `mfs` (most free space) as a member of an rclone union
    -remote.
    -
    -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
    -
    -#  Oracle Object Storage
    -- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
    -- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
    -- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
    -
    -Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.)  You may put subdirectories in 
    -too, e.g. `remote:bucket/path/to/dir`.
    -
    -Sample command to transfer local artifacts to remote:bucket in oracle object storage:
    -
    -`rclone -vvv  --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64  --retries 2  --oos-chunk-size 10Mi --oos-upload-concurrency 10000  --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts  remote:bucket -vv`
    -
    -## Configuration
    -
    -Here is an example of making an oracle object storage configuration. `rclone config` walks you 
    -through it.
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -```
    -n) New remote
    -d) Delete remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -e/n/d/r/c/s/q> n
    -
    -Enter name for new remote.
    -name> remote
    -
    -Option Storage.
    -Type of storage to configure.
    -Choose a number from below, or type in your own value.
    -[snip]
    -XX / Oracle Cloud Infrastructure Object Storage
    -   \ (oracleobjectstorage)
    -Storage> oracleobjectstorage
    -
    -Option provider.
    -Choose your Auth Provider
    -Choose a number from below, or type in your own string value.
    -Press Enter for the default (env_auth).
    - 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
    -   \ (env_auth)
    -   / use an OCI user and an API key for authentication.
    - 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
    -   | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
    -   \ (user_principal_auth)
    -   / use instance principals to authorize an instance to make API calls.
    - 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
    -   | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
    -   \ (instance_principal_auth)
    - 4 / use resource principals to make API calls
    -   \ (resource_principal_auth)
    - 5 / no credentials needed, this is typically for reading public buckets
    -   \ (no_auth)
    -provider> 2
    -
    -Option namespace.
    -Object storage namespace
    -Enter a value.
    -namespace> idbamagbg734
    -
    -Option compartment.
    -Object storage compartment OCID
    -Enter a value.
    -compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
    -
    -Option region.
    -Object storage Region
    -Enter a value.
    -region> us-ashburn-1
    -
    -Option endpoint.
    -Endpoint for Object storage API.
    -Leave blank to use the default endpoint for the region.
    -Enter a value. Press Enter to leave empty.
    -endpoint>
    -
    -Option config_file.
    -Full Path to OCI config file
    -Choose a number from below, or type in your own string value.
    -Press Enter for the default (~/.oci/config).
    - 1 / oci configuration file location
    -   \ (~/.oci/config)
    -config_file> /etc/oci/dev.conf
    -
    -Option config_profile.
    -Profile name inside OCI config file
    -Choose a number from below, or type in your own string value.
    -Press Enter for the default (Default).
    - 1 / Use the default profile
    -   \ (Default)
    -config_profile> Test
    -
    -Edit advanced config?
    -y) Yes
    -n) No (default)
    -y/n> n
    -
    -Configuration complete.
    -Options:
    -- type: oracleobjectstorage
    -- namespace: idbamagbg734
    -- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
    -- region: us-ashburn-1
    -- provider: user_principal_auth
    -- config_file: /etc/oci/dev.conf
    -- config_profile: Test
    -Keep this "remote" remote?
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -See all buckets
    -
    -    rclone lsd remote:
    -
    -Create a new bucket
    -
    -    rclone mkdir remote:bucket
    -
    -List the contents of a bucket
    -
    -    rclone ls remote:bucket
    -    rclone ls remote:bucket --max-depth 1
    -
    -## Authentication Providers 
    -
    -OCI has various authentication methods. To learn more about authentication methods please refer [oci authentication 
    -methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) 
    -These choices can be specified in the rclone config file.
    -
    -Rclone supports the following OCI authentication provider.
    -
    -    User Principal
    -    Instance Principal
    -    Resource Principal
    -    No authentication
    -
    -### User Principal
    -
    -Sample rclone config file for Authentication Provider User Principal:
    -
    -    [oos]
    -    type = oracleobjectstorage
    -    namespace = id<redacted>34
    -    compartment = ocid1.compartment.oc1..aa<redacted>ba
    -    region = us-ashburn-1
    -    provider = user_principal_auth
    -    config_file = /home/opc/.oci/config
    -    config_profile = Default
    -
    -Advantages:
    -- One can use this method from any server within OCI or on-premises or from other cloud provider.
    -
    -Considerations:
    -- you need to configure user’s privileges / policy to allow access to object storage
    -- Overhead of managing users and keys.
    -- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
    -
    -###  Instance Principal
    -
    -An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal. 
    -With this approach no credentials have to be stored and managed.
    -
    -Sample rclone configuration file for Authentication Provider Instance Principal:
    -
    -    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
    -    [oos]
    -    type = oracleobjectstorage
    -    namespace = id<redacted>fn
    -    compartment = ocid1.compartment.oc1..aa<redacted>k7a
    -    region = us-ashburn-1
    -    provider = instance_principal_auth
    -
    -Advantages:
    -
    -- With instance principals, you don't need to configure user credentials and transfer/ save it to disk in your compute 
    -  instances or rotate the credentials.
    -- You don’t need to deal with users and keys.
    -- Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault, 
    -  using kms etc.
    -
    -Considerations:
    -
    -- You need to configure a dynamic group having this instance as member and add policy to read object storage to that 
    -  dynamic group.
    -- Everyone who has access to this machine can execute the CLI commands.
    -- It is applicable for oci compute instances only. It cannot be used on external instance or resources.
    -
    -### Resource Principal
    -
    -Resource principal auth is very similar to instance principal auth but used for resources that are not 
    -compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). 
    -To use resource principal ensure Rclone process is started with these environment variables set in its process.
    -
    -    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
    -    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
    -    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
    -    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
    -
    -Sample rclone configuration file for Authentication Provider Resource Principal:
    -
    -    [oos]
    -    type = oracleobjectstorage
    -    namespace = id<redacted>34
    -    compartment = ocid1.compartment.oc1..aa<redacted>ba
    -    region = us-ashburn-1
    -    provider = resource_principal_auth
    -
    -### No authentication
    -
    -Public buckets do not require any authentication mechanism to read objects.
    -Sample rclone configuration file for No authentication:
    -    
    -    [oos]
    -    type = oracleobjectstorage
    -    namespace = id<redacted>34
    -    compartment = ocid1.compartment.oc1..aa<redacted>ba
    -    region = us-ashburn-1
    -    provider = no_auth
    -
    -### Modification times and hashes
    -
    -The modification time is stored as metadata on the object as
    -`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
    -
    -If the modification time needs to be updated rclone will attempt to perform a server
    -side copy to update the modification if the object can be copied in a single part.
    -In the case the object is larger than 5Gb, the object will be uploaded rather than copied.
    -
    -Note that reading this from the object takes an additional `HEAD` request as the metadata
    -isn't returned in object listings.
    -
    -The MD5 hash algorithm is supported.
    -
    -### Multipart uploads
    -
    -rclone supports multipart uploads with OOS which means that it can
    -upload files bigger than 5 GiB.
    -
    -Note that files uploaded *both* with multipart upload *and* through
    -crypt remotes do not have MD5 sums.
    -
    -rclone switches from single part uploads to multipart uploads at the
    -point specified by `--oos-upload-cutoff`.  This can be a maximum of 5 GiB
    -and a minimum of 0 (ie always upload multipart files).
    -
    -The chunk sizes used in the multipart upload are specified by
    -`--oos-chunk-size` and the number of chunks uploaded concurrently is
    -specified by `--oos-upload-concurrency`.
    -
    -Multipart uploads will use `--transfers` * `--oos-upload-concurrency` *
    -`--oos-chunk-size` extra memory.  Single part uploads to not use extra
    -memory.
    -
    -Single part transfers can be faster than multipart transfers or slower
    -depending on your latency from oos - the more latency, the more likely
    -single part transfers will be faster.
    -
    -Increasing `--oos-upload-concurrency` will increase throughput (8 would
    -be a sensible value) and increasing `--oos-chunk-size` also increases
    -throughput (16M would be sensible).  Increasing either of these will
    -use more memory.  The default values are high enough to gain most of
    -the possible performance without using too much memory.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
    -
    -#### --oos-provider
    -
    -Choose your Auth Provider
    -
    -Properties:
    -
    -- Config:      provider
    -- Env Var:     RCLONE_OOS_PROVIDER
    -- Type:        string
    -- Default:     "env_auth"
    -- Examples:
    -    - "env_auth"
    -        - automatically pickup the credentials from runtime(env), first one to provide auth wins
    -    - "user_principal_auth"
    -        - use an OCI user and an API key for authentication.
    -        - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
    -        - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
    -    - "instance_principal_auth"
    -        - use instance principals to authorize an instance to make API calls. 
    -        - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. 
    -        - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
    -    - "resource_principal_auth"
    -        - use resource principals to make API calls
    -    - "no_auth"
    -        - no credentials needed, this is typically for reading public buckets
    -
    -#### --oos-namespace
    -
    -Object storage namespace
    -
    -Properties:
    -
    -- Config:      namespace
    -- Env Var:     RCLONE_OOS_NAMESPACE
    -- Type:        string
    -- Required:    true
    -
    -#### --oos-compartment
    -
    -Object storage compartment OCID
    -
    -Properties:
    -
    -- Config:      compartment
    -- Env Var:     RCLONE_OOS_COMPARTMENT
    -- Provider:    !no_auth
    -- Type:        string
    -- Required:    true
    -
    -#### --oos-region
    -
    -Object storage Region
    -
    -Properties:
    -
    -- Config:      region
    -- Env Var:     RCLONE_OOS_REGION
    -- Type:        string
    -- Required:    true
    -
    -#### --oos-endpoint
    -
    -Endpoint for Object storage API.
    -
    -Leave blank to use the default endpoint for the region.
    -
    -Properties:
    -
    -- Config:      endpoint
    -- Env Var:     RCLONE_OOS_ENDPOINT
    +- Config:      description
    +- Env Var:     RCLONE_ONEDRIVE_DESCRIPTION
     - Type:        string
     - Required:    false
     
    -#### --oos-config-file
    -
    -Path to OCI config file
    -
    -Properties:
    -
    -- Config:      config_file
    -- Env Var:     RCLONE_OOS_CONFIG_FILE
    -- Provider:    user_principal_auth
    -- Type:        string
    -- Default:     "~/.oci/config"
    -- Examples:
    -    - "~/.oci/config"
    -        - oci configuration file location
    -
    -#### --oos-config-profile
    -
    -Profile name inside the oci config file
    -
    -Properties:
    -
    -- Config:      config_profile
    -- Env Var:     RCLONE_OOS_CONFIG_PROFILE
    -- Provider:    user_principal_auth
    -- Type:        string
    -- Default:     "Default"
    -- Examples:
    -    - "Default"
    -        - Use the default profile
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
    -
    -#### --oos-storage-tier
    -
    -The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
    -
    -Properties:
    -
    -- Config:      storage_tier
    -- Env Var:     RCLONE_OOS_STORAGE_TIER
    -- Type:        string
    -- Default:     "Standard"
    -- Examples:
    -    - "Standard"
    -        - Standard storage tier, this is the default tier
    -    - "InfrequentAccess"
    -        - InfrequentAccess storage tier
    -    - "Archive"
    -        - Archive storage tier
    -
    -#### --oos-upload-cutoff
    -
    -Cutoff for switching to chunked upload.
    -
    -Any files larger than this will be uploaded in chunks of chunk_size.
    -The minimum is 0 and the maximum is 5 GiB.
    -
    -Properties:
    -
    -- Config:      upload_cutoff
    -- Env Var:     RCLONE_OOS_UPLOAD_CUTOFF
    -- Type:        SizeSuffix
    -- Default:     200Mi
    -
    -#### --oos-chunk-size
    -
    -Chunk size to use for uploading.
    -
    -When uploading files larger than upload_cutoff or files with unknown
    -size (e.g. from "rclone rcat" or uploaded with "rclone mount" they will be uploaded 
    -as multipart uploads using this chunk size.
    -
    -Note that "upload_concurrency" chunks of this size are buffered
    -in memory per transfer.
    -
    -If you are transferring large files over high-speed links and you have
    -enough memory, then increasing this will speed up the transfers.
    -
    -Rclone will automatically increase the chunk size when uploading a
    -large file of known size to stay below the 10,000 chunks limit.
    -
    -Files of unknown size are uploaded with the configured
    -chunk_size. Since the default chunk size is 5 MiB and there can be at
    -most 10,000 chunks, this means that by default the maximum size of
    -a file you can stream upload is 48 GiB.  If you wish to stream upload
    -larger files then you will need to increase chunk_size.
    -
    -Increasing the chunk size decreases the accuracy of the progress
    -statistics displayed with "-P" flag.
    -
    -
    -Properties:
    -
    -- Config:      chunk_size
    -- Env Var:     RCLONE_OOS_CHUNK_SIZE
    -- Type:        SizeSuffix
    -- Default:     5Mi
    -
    -#### --oos-max-upload-parts
    -
    -Maximum number of parts in a multipart upload.
    -
    -This option defines the maximum number of multipart chunks to use
    -when doing a multipart upload.
    -
    -OCI has max parts limit of 10,000 chunks.
    -
    -Rclone will automatically increase the chunk size when uploading a
    -large file of a known size to stay below this number of chunks limit.
    -
    -
    -Properties:
    -
    -- Config:      max_upload_parts
    -- Env Var:     RCLONE_OOS_MAX_UPLOAD_PARTS
    -- Type:        int
    -- Default:     10000
    -
    -#### --oos-upload-concurrency
    -
    -Concurrency for multipart uploads.
    -
    -This is the number of chunks of the same file that are uploaded
    -concurrently.
    -
    -If you are uploading small numbers of large files over high-speed links
    -and these uploads do not fully utilize your bandwidth, then increasing
    -this may help to speed up the transfers.
    -
    -Properties:
    -
    -- Config:      upload_concurrency
    -- Env Var:     RCLONE_OOS_UPLOAD_CONCURRENCY
    -- Type:        int
    -- Default:     10
    -
    -#### --oos-copy-cutoff
    -
    -Cutoff for switching to multipart copy.
    -
    -Any files larger than this that need to be server-side copied will be
    -copied in chunks of this size.
    -
    -The minimum is 0 and the maximum is 5 GiB.
    -
    -Properties:
    -
    -- Config:      copy_cutoff
    -- Env Var:     RCLONE_OOS_COPY_CUTOFF
    -- Type:        SizeSuffix
    -- Default:     4.656Gi
    -
    -#### --oos-copy-timeout
    -
    -Timeout for copy.
    -
    -Copy is an asynchronous operation, specify timeout to wait for copy to succeed
    -
    -
    -Properties:
    -
    -- Config:      copy_timeout
    -- Env Var:     RCLONE_OOS_COPY_TIMEOUT
    -- Type:        Duration
    -- Default:     1m0s
    -
    -#### --oos-disable-checksum
    -
    -Don't store MD5 checksum with object metadata.
    -
    -Normally rclone will calculate the MD5 checksum of the input before
    -uploading it so it can add it to metadata on the object. This is great
    -for data integrity checking but can cause long delays for large files
    -to start uploading.
    -
    -Properties:
    -
    -- Config:      disable_checksum
    -- Env Var:     RCLONE_OOS_DISABLE_CHECKSUM
    -- Type:        bool
    -- Default:     false
    -
    -#### --oos-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_OOS_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,InvalidUtf8,Dot
    -
    -#### --oos-leave-parts-on-error
    -
    -If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.
    -
    -It should be set to true for resuming uploads across different sessions.
    -
    -WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add
    -additional costs if not cleaned up.
    -
    -
    -Properties:
    -
    -- Config:      leave_parts_on_error
    -- Env Var:     RCLONE_OOS_LEAVE_PARTS_ON_ERROR
    -- Type:        bool
    -- Default:     false
    -
    -#### --oos-attempt-resume-upload
    -
    -If true attempt to resume previously started multipart upload for the object.
    -This will be helpful to speed up multipart transfers by resuming uploads from past session.
    -
    -WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is 
    -aborted and a new multipart upload is started with the new chunk size.
    -
    -The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully.
    -
    -
    -Properties:
    -
    -- Config:      attempt_resume_upload
    -- Env Var:     RCLONE_OOS_ATTEMPT_RESUME_UPLOAD
    -- Type:        bool
    -- Default:     false
    -
    -#### --oos-no-check-bucket
    -
    -If set, don't attempt to check the bucket exists or create it.
    -
    -This can be useful when trying to minimise the number of transactions
    -rclone does if you know the bucket exists already.
    -
    -It can also be needed if the user you are using does not have bucket
    -creation permissions.
    -
    -
    -Properties:
    -
    -- Config:      no_check_bucket
    -- Env Var:     RCLONE_OOS_NO_CHECK_BUCKET
    -- Type:        bool
    -- Default:     false
    -
    -#### --oos-sse-customer-key-file
    -
    -To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
    -with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.'
    -
    -Properties:
    -
    -- Config:      sse_customer_key_file
    -- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - None
    -
    -#### --oos-sse-customer-key
    -
    -To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
    -encrypt or  decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is
    -needed. For more information, see Using Your Own Keys for Server-Side Encryption 
    -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
    -
    -Properties:
    -
    -- Config:      sse_customer_key
    -- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - None
    -
    -#### --oos-sse-customer-key-sha256
    -
    -If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption
    -key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for 
    -Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
    -
    -Properties:
    -
    -- Config:      sse_customer_key_sha256
    -- Env Var:     RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - None
    -
    -#### --oos-sse-kms-key-id
    -
    -if using your own master key in vault, this header specifies the
    -OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call
    -the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key.
    -Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
    -
    -Properties:
    -
    -- Config:      sse_kms_key_id
    -- Env Var:     RCLONE_OOS_SSE_KMS_KEY_ID
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - None
    -
    -#### --oos-sse-customer-algorithm
    -
    -If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm.
    -Object Storage supports "AES256" as the encryption algorithm. For more information, see
    -Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
    -
    -Properties:
    -
    -- Config:      sse_customer_algorithm
    -- Env Var:     RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - None
    -    - "AES256"
    -        - AES256
    -
    -## Backend commands
    -
    -Here are the commands specific to the oracleobjectstorage backend.
    -
    -Run them with
    -
    -    rclone backend COMMAND remote:
    -
    -The help below will explain what arguments each command takes.
    -
    -See the [backend](https://rclone.org/commands/rclone_backend/) command for more
    -info on how to pass options and arguments.
    -
    -These can be run on a running backend using the rc command
    -[backend/command](https://rclone.org/rc/#backend-command).
    -
    -### rename
    -
    -change the name of an object
    -
    -    rclone backend rename remote: [options] [<arguments>+]
    -
    -This command can be used to rename a object.
    -
    -Usage Examples:
    -
    -    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
    -
    -
    -### list-multipart-uploads
    -
    -List the unfinished multipart uploads
    -
    -    rclone backend list-multipart-uploads remote: [options] [<arguments>+]
    -
    -This command lists the unfinished multipart uploads in JSON format.
    -
    -    rclone backend list-multipart-uploads oos:bucket/path/to/object
    -
    -It returns a dictionary of buckets with values as lists of unfinished
    -multipart uploads.
    -
    -You can call it with no bucket in which case it lists all bucket, with
    -a bucket or with a bucket and path.
    -
    +### Metadata
    +
    +OneDrive supports System Metadata (not User Metadata, as of this writing) for
    +both files and directories. Much of the metadata is read-only, and there are some
    +differences between OneDrive Personal and Business (see table below for
    +details).
    +
    +Permissions are also supported, if `--onedrive-metadata-permissions` is set. The
    +accepted values for `--onedrive-metadata-permissions` are `read`, `write`,
    +`read,write`, and `off` (the default). `write` supports adding new permissions,
    +updating the "role" of existing permissions, and removing permissions. Updating
    +and removing require the Permission ID to be known, so it is recommended to use
    +`read,write` instead of `write` if you wish to update/remove permissions.
    +
    +Permissions are read/written in JSON format using the same schema as the
    +[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online),
    +which differs slightly between OneDrive Personal and Business.
    +
    +Example for OneDrive Personal:
    +```json
    +[
         {
    -      "test-bucket": [
    -                {
    -                        "namespace": "test-namespace",
    -                        "bucket": "test-bucket",
    -                        "object": "600m.bin",
    -                        "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
    -                        "timeCreated": "2022-07-29T06:21:16.595Z",
    -                        "storageTier": "Standard"
    -                }
    -        ]
    +        "id": "1234567890ABC!123",
    +        "grantedTo": {
    +            "user": {
    +                "id": "ryan@contoso.com"
    +            },
    +            "application": {},
    +            "device": {}
    +        },
    +        "invitation": {
    +            "email": "ryan@contoso.com"
    +        },
    +        "link": {
    +            "webUrl": "https://1drv.ms/t/s!1234567890ABC"
    +        },
    +        "roles": [
    +            "read"
    +        ],
    +        "shareId": "s!1234567890ABC"
    +    }
    +]
    +```
    +
    +Example for OneDrive Business:
    +```json
    +[
    +    {
    +        "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
    +        "grantedToIdentities": [
    +            {
    +                "user": {
    +                    "displayName": "ryan@contoso.com"
    +                },
    +                "application": {},
    +                "device": {}
    +            }
    +        ],
    +        "link": {
    +            "type": "view",
    +            "scope": "users",
    +            "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
    +        },
    +        "roles": [
    +            "read"
    +        ],
    +        "shareId": "u!LKj1lkdlals90j1nlkascl"
    +    },
    +    {
    +        "id": "5D33DD65C6932946",
    +        "grantedTo": {
    +            "user": {
    +                "displayName": "John Doe",
    +                "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
    +            },
    +            "application": {},
    +            "device": {}
    +        },
    +        "roles": [
    +            "owner"
    +        ],
    +        "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
    +    }
    +]
    +```
    +
    +To write permissions, pass in a "permissions" metadata key using this same
    +format. The [`--metadata-mapper`](https://rclone.org/docs/#metadata-mapper) tool
    +can be very helpful for this.
    +
    +When adding permissions, an email address can be provided in the `User.ID` or
    +`DisplayName` properties of `grantedTo` or `grantedToIdentities`. Alternatively,
    +an ObjectID can be provided in `User.ID`. At least one valid recipient must be
    +provided in order to add a permission for a user. Creating a Public Link is also
    +supported, if `Link.Scope` is set to `"anonymous"`.
    +
    +Example request to add a "read" permission:
    +
    +```json
    +[
    +    {
    +            "id": "",
    +            "grantedTo": {
    +                    "user": {},
    +                    "application": {},
    +                    "device": {}
    +            },
    +            "grantedToIdentities": [
    +                    {
    +                            "user": {
    +                                    "id": "ryan@contoso.com"
    +                            },
    +                            "application": {},
    +                            "device": {}
    +                    }
    +            ],
    +            "roles": [
    +                    "read"
    +            ]
    +    }
    +]
    +```
    +
    +Note that adding a permission can fail if a conflicting permission already
    +exists for the file/folder.
    +
    +To update an existing permission, include both the Permission ID and the new
    +`roles` to be assigned. `roles` is the only property that can be changed.
    +
    +To remove permissions, pass in a blob containing only the permissions you wish
    +to keep (which can be empty, to remove all.)
    +
    +Note that both reading and writing permissions requires extra API calls, so if
    +you don't need to read or write permissions it is recommended to omit
    +`--onedrive-metadata-permissions`.
    +
    +Metadata and permissions are supported for Folders (directories) as well as
    +Files. Note that setting the `mtime` or `btime` on a Folder requires one extra
    +API call on OneDrive Business only.
    +
    +OneDrive does not currently support User Metadata. When writing metadata, only
    +writeable system properties will be written -- any read-only or unrecognized keys
    +passed in will be ignored.
    +
    +TIP: to see the metadata and permissions for any file or folder, run:
    +
    +```
    +rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read
    +```
    +
    +Here are the possible system metadata items for the onedrive backend.
    +
    +| Name | Help | Type | Example | Read Only |
    +|------|------|------|---------|-----------|
    +| btime | Time of file birth (creation) with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N |
    +| content-type | The MIME type of the file. | string | text/plain | **Y** |
    +| created-by-display-name | Display name of the user that created the item. | string | John Doe | **Y** |
    +| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
    +| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N |
    +| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | **Y** |
    +| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | **Y** |
    +| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
    +| malware-detected | Whether OneDrive has detected that the item contains malware. | boolean | true | **Y** |
    +| mtime | Time of last modification with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N |
    +| package-type | If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. | string | oneNote | **Y** |
    +| permissions | Permissions in a JSON dump of OneDrive format. Enable with `--onedrive-metadata-permissions`. Properties: id, grantedTo, grantedToIdentities, invitation, inheritedFrom, link, roles, shareId | JSON | {} | N |
    +| shared-by-id | ID of the user that shared the item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
    +| shared-owner-id | ID of the owner of the shared item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
    +| shared-scope | If shared, indicates the scope of how the item is shared: anonymous, organization, or users. | string | users | **Y** |
    +| shared-time | Time when the item was shared, with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | **Y** |
    +| utime | Time of upload with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | **Y** |
    +
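    +For example, permissions could be written during a copy by combining
    +these flags with a `--metadata-mapper` program that injects the
    +"permissions" key (a hedged sketch; the script path is a hypothetical
    +placeholder):
    +
    +    rclone copy file.txt remote:dir -M --onedrive-metadata-permissions read,write --metadata-mapper bin/add_permission.py
    +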

    See the metadata docs for more info.

    +

    Limitations

    +

    If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.

    +

    Naming

    +

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    +

    There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to instead.

    +

    File sizes

    +

    The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business (Updated 13 Jan 2021).

    +

    Path length

    +

    The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.

    +

    Number of files

    +

    OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info.


    An official document about the limitations for different types of OneDrive can be found here.


    Versions

Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.

    For example the copy command is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file which uses another version.


    You can use the rclone cleanup command (see below) to remove all old versions.


    Or you can set the no_versions parameter to true and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it.
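
For example (the remote name is a placeholder), this can be given as a flag on the command line:

    rclone copy /home/source remote:backup --onedrive-no-versions

Setting no_versions = true in the remote's section of the config file is equivalent.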


    Note At the time of writing Onedrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found" so cleanup and no_versions should not be used on Onedrive Personal.


    Disabling versioning


    Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:

1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)
2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)
4. Set-SPOTenant -EnableMinimumVersionRequirement $False
5. Disconnect-SPOService (to disconnect from the server)

    Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.


User Weropol has found a method to disable versioning on OneDrive:

1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
2. Click Site settings.
3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
4. Click Customize "Documents".
5. Click General Settings > Versioning Settings.
6. Under Document Version History select the option No versioning. Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
7. Apply the changes by clicking OK.
8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
9. Restore the versioning settings after using rclone. (Optional)

    Cleanup

OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, then querying each file for versions it can be quite slow. Rclone does --checkers tests in parallel. The command also supports --interactive/-i or --dry-run which is a great way to see what it would do.

    rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
    rclone cleanup remote:path/subdir               # unconditionally remove all old versions for path/subdir

    NB Onedrive personal can't currently delete versions


    Troubleshooting


    Excessive throttling or blocked on SharePoint

If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"
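
For example, applied to a copy (the paths and remote name are placeholders):

    rclone copy /home/source remote:backup --user-agent "ISV|rclone.org|rclone/v1.55.1"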

    The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online


    Unexpected file size/hash differences on Sharepoint


    It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

    --ignore-checksum --ignore-size
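
For example, a sync with both checks disabled might look like this (the paths and remote name are placeholders):

    rclone sync /home/source remote:backup --ignore-checksum --ignore-size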

    Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.


    Replacing/deleting existing files on Sharepoint gets "item not found"


    It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:

    --backup-dir mysharepoint:rclone-backup-dir
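
For example, a full sync using this workaround might look like this (the source path and destination folder are placeholders):

    rclone sync /home/source mysharepoint:documents --backup-dir mysharepoint:rclone-backup-dir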

    access_denied (AADSTS65005)

    Error: access_denied
    Code: AADSTS65005
    Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

    This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.


    However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint


    invalid_grant (AADSTS50076)

    Error: invalid_grant
    Code: AADSTS50076
    Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

    If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.

Invalid request when making public links

    On Sharepoint and OneDrive for Business, rclone link may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow public links to be made for the organisation/sharepoint library. To fix the permissions as an admin, take a look at the docs: 1, 2.


    Can not access Shared with me files

"Shared with me" files are not supported by rclone currently, but there is a workaround:

1. Visit https://onedrive.live.com
2. Right click an item in Shared, then click Add shortcut to My files in the context menu.
3. The shortcut will appear in My files and you can access it with rclone; it behaves like a normal folder/file.

    Live Photos uploaded from iOS (small video clips in .heic files)


    The iOS OneDrive app introduced upload and storage of Live Photos in 2020. The usage and download of these uploaded Live Photos is unfortunately still work-in-progress and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.


    The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.


    The different sizes will cause rclone copy/sync to repeatedly recopy unmodified photos something like this:

    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
    DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
    INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)

    These recopies can be worked around by adding --ignore-size. Please note that this workaround only syncs the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations.
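
For example (the remote and local paths are placeholders):

    rclone sync remote:Pictures /home/user/Pictures --ignore-size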


    The different sizes will also cause rclone check to report size errors something like this:

    ERROR : 20230203_123826234_iOS.heic: sizes differ

    These check errors can be suppressed by adding --ignore-size.


    The different sizes will also cause rclone mount to fail downloading with an error something like this:

    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF

    or like this when using --cache-mode=full:

    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
    ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:

    OpenDrive


    Paths are specified as remote:path


    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration


    Here is an example of how to make a remote called remote. First run:

    rclone config

    This will guide you through an interactive setup process:

    n) New remote
    d) Delete remote
    q) Quit config
    e/n/d/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / OpenDrive
       \ "opendrive"
    [snip]
    Storage> opendrive
    Username
    username>
    Password
    y) Yes type in my own password
    g) Generate random password
    y/g> y
    Enter the password:
    password:
    Confirm the password:
    password:
    --------------------
    [remote]
    username =
    password = *** ENCRYPTED ***
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

    List directories in top level of your OpenDrive

    rclone lsd remote:

    List all the files in your OpenDrive

    rclone ls remote:

    To copy a local directory to an OpenDrive directory called backup

    rclone copy /home/source remote:backup

    Modification times and hashes


    OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.


    The MD5 hash algorithm is supported.


    Restricted filename characters

| Character | Value | Replacement |
|-----------|:-----:|:-----------:|
| NUL       | 0x00  | ␀           |
| /         | 0x2F  | ／          |
| "         | 0x22  | ＂          |
| *         | 0x2A  | ＊          |
| :         | 0x3A  | ：          |
| <         | 0x3C  | ＜          |
| >         | 0x3E  | ＞          |
| ?         | 0x3F  | ？          |
| \         | 0x5C  | ＼          |
| \|        | 0x7C  | ｜          |

    File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
|-----------|:-----:|:-----------:|
| SP        | 0x20  | ␠           |
| HT        | 0x09  | ␉           |
| LF        | 0x0A  | ␊           |
| VT        | 0x0B  | ␋           |
| CR        | 0x0D  | ␍           |

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.


    Standard options


    Here are the Standard options specific to opendrive (OpenDrive).


    --opendrive-username

Username.

Properties:

    --opendrive-password

Password.

NB Input to this must be obscured - see rclone obscure.
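
For example, you can generate the obscured form yourself (the password shown is a placeholder):

    rclone obscure "yourpassword"

The printed string is what goes in the password field of the config file.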

    Properties:

Properties:

    Advanced options


    Here are the Advanced options specific to opendrive (OpenDrive).


    --opendrive-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

    --opendrive-chunk-size

Files will be uploaded in chunks this size.

Note that these chunks are buffered in memory so increasing them will increase memory use.

Properties:

    --opendrive-description

Description of the remote

Properties:

    Limitations


    Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ？ instead.

    rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.


    See List of backends that do not support rclone about and rclone about


    Oracle Object Storage


    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.


    Sample command to transfer local artifacts to remote:bucket in oracle object storage:

    rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv

    Configuration


    Here is an example of making an oracle object storage configuration. rclone config walks you through it.


    Here is an example of how to make a remote called remote. First run:

    rclone config

    This will guide you through an interactive setup process:

    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> n

    Enter name for new remote.
    name> remote

    Option Storage.
    Type of storage to configure.
    Choose a number from below, or type in your own value.
    [snip]
    XX / Oracle Cloud Infrastructure Object Storage
       \ (oracleobjectstorage)
    Storage> oracleobjectstorage

    Option provider.
    Choose your Auth Provider
    Choose a number from below, or type in your own string value.
    Press Enter for the default (env_auth).
     1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
       \ (env_auth)
       / use an OCI user and an API key for authentication.
     2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
       | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
       \ (user_principal_auth)
       / use instance principals to authorize an instance to make API calls.
     3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
       | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
       \ (instance_principal_auth)
       / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud
     4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM).
       | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm
       \ (workload_identity_auth)
     5 / use resource principals to make API calls
       \ (resource_principal_auth)
     6 / no credentials needed, this is typically for reading public buckets
       \ (no_auth)
    provider> 2

    Option namespace.
    Object storage namespace
    Enter a value.
    namespace> idbamagbg734

    Option compartment.
    Object storage compartment OCID
    Enter a value.
    compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba

    Option region.
    Object storage Region
    Enter a value.
    region> us-ashburn-1

    Option endpoint.
    Endpoint for Object storage API.
    Leave blank to use the default endpoint for the region.
    Enter a value. Press Enter to leave empty.
    endpoint>

    Option config_file.
    Full Path to OCI config file
    Choose a number from below, or type in your own string value.
    Press Enter for the default (~/.oci/config).
     1 / oci configuration file location
       \ (~/.oci/config)
    config_file> /etc/oci/dev.conf

    Option config_profile.
    Profile name inside OCI config file
    Choose a number from below, or type in your own string value.
    Press Enter for the default (Default).
     1 / Use the default profile
       \ (Default)
    config_profile> Test

    Edit advanced config?
    y) Yes
    n) No (default)
    y/n> n

    Configuration complete.
    Options:
    - type: oracleobjectstorage
    - namespace: idbamagbg734
    - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
    - region: us-ashburn-1
    - provider: user_principal_auth
    - config_file: /etc/oci/dev.conf
    - config_profile: Test
    Keep this "remote" remote?
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

    See all buckets

    rclone lsd remote:

    Create a new bucket

    rclone mkdir remote:bucket

    List the contents of a bucket

    rclone ls remote:bucket
    rclone ls remote:bucket --max-depth 1

    Authentication Providers

OCI has various authentication methods. To learn more about authentication methods please refer to the OCI authentication methods documentation. These choices can be specified in the rclone config file.

Rclone supports the following OCI authentication providers:

- User Principal
- Instance Principal
- Resource Principal
- Workload Identity
- No authentication

    User Principal


    Sample rclone config file for Authentication Provider User Principal:

    [oos]
    type = oracleobjectstorage
    namespace = id<redacted>34
    compartment = ocid1.compartment.oc1..aa<redacted>ba
    region = us-ashburn-1
    provider = user_principal_auth
    config_file = /home/opc/.oci/config
    config_profile = Default

Advantages:

- One can use this method from any server within OCI or on-premises or from another cloud provider.

Considerations:

- You need to configure the user's privileges / policy to allow access to object storage.
- Overhead of managing users and keys.
- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.

    Instance Principal

An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. With this approach no credentials have to be stored and managed.


    Sample rclone configuration file for Authentication Provider Instance Principal:

    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
    [oos]
    type = oracleobjectstorage
    namespace = id<redacted>fn
    compartment = ocid1.compartment.oc1..aa<redacted>k7a
    region = us-ashburn-1
    provider = instance_principal_auth

    Advantages:


    Considerations:


    Resource Principal

Resource principal auth is very similar to instance principal auth but is used for resources that are not compute instances, such as serverless functions. To use resource principal, ensure the Rclone process is started with these environment variables set:

    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token

    Sample rclone configuration file for Authentication Provider Resource Principal:

    [oos]
    type = oracleobjectstorage
    namespace = id<redacted>34
    compartment = ocid1.compartment.oc1..aa<redacted>ba
    region = us-ashburn-1
    provider = resource_principal_auth

    Workload Identity

Workload Identity auth may be used when running Rclone from a Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. For more details on configuring Workload Identity, see Granting Workloads Access to OCI Resources. To use workload identity, ensure Rclone is started with these environment variables set:

    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1

    No authentication


    Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication:

    [oos]
    type = oracleobjectstorage
    namespace = id<redacted>34
    compartment = ocid1.compartment.oc1..aa<redacted>ba
    region = us-ashburn-1
    provider = no_auth

    Modification times and hashes


    The modification time is stored as metadata on the object as opc-meta-mtime as floating point since the epoch, accurate to 1 ns.

If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. In the case the object is larger than 5 GiB, the object will be uploaded rather than copied.


    Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.


    The MD5 hash algorithm is supported.


    Multipart uploads


    rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.


    Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.


    rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).


    The chunk sizes used in the multipart upload are specified by --oos-chunk-size and the number of chunks uploaded concurrently is specified by --oos-upload-concurrency.

Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory. Single part uploads do not use extra memory.


    Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.


    Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
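
For example (the bucket name and local path are placeholders), a transfer tuned with the values suggested above might look like:

    rclone copy /data/artifacts remote:bucket --oos-upload-concurrency 8 --oos-chunk-size 16Mi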


    Standard options


    Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).


    --oos-provider

Choose your Auth Provider

Properties:

    --oos-namespace

Object storage namespace

Properties:

    --oos-compartment

Object storage compartment OCID

Properties:

    --oos-region

Object storage Region

Properties:

    --oos-endpoint

Endpoint for Object storage API.

Leave blank to use the default endpoint for the region.

Properties:

    --oos-config-file

Path to OCI config file

Properties:

    --oos-config-profile

Profile name inside the oci config file

Properties:

    Advanced options


    Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).


    --oos-storage-tier

The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm

Properties:

    --oos-upload-cutoff

Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.

Properties:

    --oos-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown size (e.g. from "rclone rcat" or uploaded with "rclone mount") they will be uploaded as multipart uploads using this chunk size.

Note that "upload_concurrency" chunks of this size are buffered in memory per transfer.

If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.

Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.

Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.

Increasing the chunk size decreases the accuracy of the progress statistics displayed with "-P" flag.
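
As a worked example of the stream-upload limit above: 5 MiB × 10,000 chunks ≈ 48.8 GiB. Raising the chunk size raises the ceiling proportionally, e.g. for a streamed upload of unknown size (the command, object name and chunk size are illustrative placeholders):

    some_command | rclone rcat remote:bucket/big.bin --oos-chunk-size 10Mi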

    Properties:


    --oos-max-upload-parts

Maximum number of parts in a multipart upload.

This option defines the maximum number of multipart chunks to use when doing a multipart upload.

OCI has max parts limit of 10,000 chunks.

Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit.

Properties:

    --oos-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded concurrently.

If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

Properties:

    --oos-copy-cutoff

Cutoff for switching to multipart copy.

Any files larger than this that need to be server-side copied will be copied in chunks of this size.

The minimum is 0 and the maximum is 5 GiB.

Properties:

    --oos-copy-timeout

Timeout for copy.

Copy is an asynchronous operation, specify timeout to wait for copy to succeed

Properties:

    --oos-disable-checksum

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

Properties:

    --oos-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

    --oos-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add additional costs if not cleaned up.

Properties:

    --oos-attempt-resume-upload

If true attempt to resume previously started multipart upload for the object. This will be helpful to speed up multipart transfers by resuming uploads from a past session.

WARNING: If chunk size differs in the resumed session from the past incomplete session, then the resumed multipart upload is aborted and a new multipart upload is started with the new chunk size.

The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully.

Properties:

    --oos-no-check-bucket

If set, don't attempt to check the bucket exists or create it.

This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.

It can also be needed if the user you are using does not have bucket creation permissions.

Properties:

    --oos-sse-customer-key-file

To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.

Properties:

    --oos-sse-customer-key

To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)

Properties:

    --oos-sse-customer-key-sha256

If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).

Properties:

    --oos-sse-kms-key-id

If using your own master key in vault, this header specifies the OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.

Properties:

    --oos-sse-customer-algorithm

If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. Object Storage supports "AES256" as the encryption algorithm. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).

Properties:

    --oos-description

Description of the remote

Properties:

    Backend commands


    Here are the commands specific to the oracleobjectstorage backend.


    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.


    See the backend command for more info on how to pass options and arguments.


    These can be run on a running backend using the rc command backend/command.


    rename


    change the name of an object

    rclone backend rename remote: [options] [<arguments>+]

This command can be used to rename an object.

Usage Examples:

    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name

    list-multipart-uploads


    List the unfinished multipart uploads

    rclone backend list-multipart-uploads remote: [options] [<arguments>+]

    This command lists the unfinished multipart uploads in JSON format.

    rclone backend list-multipart-uploads oos:bucket/path/to/object

    It returns a dictionary of buckets with values as lists of unfinished multipart uploads.


You can call it with no bucket in which case it lists all buckets, with a bucket or with a bucket and path.

    {
      "test-bucket": [
                {
                        "namespace": "test-namespace",
                        "bucket": "test-bucket",
                        "object": "600m.bin",
                        "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
                        "timeCreated": "2022-07-29T06:21:16.595Z",
                        "storageTier": "Standard"
                }
        ]
    }

    cleanup


    Remove unfinished multipart uploads.

    rclone backend cleanup remote: [options] [<arguments>+]

    This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.


    Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

    rclone backend cleanup oos:bucket/path/to/object
    rclone backend cleanup -o max-age=7w oos:bucket/path/to/object

    Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.


    Options:

- "max-age": Max age of upload to delete

    restore


    Restore objects from Archive to Standard storage

    rclone backend restore remote: [options] [<arguments>+]

    This command can be used to restore one or more objects from Archive to Standard storage.

Usage Examples:

    rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
    rclone backend restore oos:bucket -o hours=HOURS

    This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags

    rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72

    All the objects shown will be marked for restore, then

    rclone backend restore --include "*.txt" oos:bucket/path -o hours=72

It returns a list of status dictionaries with Object Name and Status keys. The Status will be "RESTORED" if it was successful or an error message if not.

    [
        {
            "Object": "test.txt",
            "Status": "RESTORED"
        },
        {
            "Object": "test/file4.txt",
            "Status": "RESTORED"
        }
    ]

    Options:


    Tutorials


    Mounting Buckets


    QingStor


    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.


    Configuration


Here is an example of making a QingStor configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    No remotes found, make a new one?
    n) New remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    n/r/c/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / QingStor Object Storage
       \ "qingstor"
    [snip]
    Storage> qingstor
    Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
    Choose a number from below, or type in your own value
     1 / Enter QingStor credentials in the next step
       \ "false"
     2 / Get QingStor credentials from the environment (env vars or IAM)
       \ "true"
    env_auth> 1
    QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
    access_key_id> access_key
    QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
    secret_access_key> secret_key
    Enter an endpoint URL to connection QingStor API.
    Leave blank will use the default value "https://qingstor.com:443"
    endpoint>
    Zone connect to. Default is "pek3a".
    Choose a number from below, or type in your own value
       / The Beijing (China) Three Zone
     1 | Needs location constraint pek3a.
       \ "pek3a"
       / The Shanghai (China) First Zone
     2 | Needs location constraint sh1a.
       \ "sh1a"
    zone> 1
    Number of connection retry.
    Leave blank will use the default value "3".
    connection_retries>
    Remote config
    --------------------
    [remote]
    env_auth = false
    access_key_id = access_key
    secret_access_key = secret_key
    endpoint =
    zone = pek3a
    connection_retries =
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

    This remote is called remote and can now be used like this


    See all buckets

    rclone lsd remote:

    Make a new bucket

    rclone mkdir remote:bucket

    List the contents of a bucket

    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    rclone sync --interactive /home/local/directory remote:bucket

    --fast-list


    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.


    Multipart uploads


    rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.


Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket for just one bucket, or rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.


    Buckets and Zone


    With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.


    Authentication


    There are two ways to supply rclone with a set of QingStor credentials. In order of precedence:

- Directly in the rclone configuration file (as configured by rclone config)
  - set access_key_id and secret_access_key
- Runtime configuration:
  - set env_auth to true in the config file
  - Exporting the following environment variables before running rclone
    - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
    - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
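
For example (the key values are placeholders), with env_auth = true set in the config file the credentials can come from the environment:

    export QS_ACCESS_KEY_ID=your_access_key
    export QS_SECRET_ACCESS_KEY=your_secret_key
    rclone lsd remote: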

    Restricted filename characters


    The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.


    Standard options


    Here are the Standard options specific to qingstor (QingCloud Object Storage).


    --qingstor-env-auth


    Get QingStor credentials from runtime.


    Only applies if access_key_id and secret_access_key is blank.


    Properties:

- Config:      env_auth
- Env Var:     RCLONE_QINGSTOR_ENV_AUTH
- Type:        bool
- Default:     false
- Examples:
    - "false"
        - Enter QingStor credentials in the next step.
    - "true"
        - Get QingStor credentials from the environment (env vars or IAM).

    --qingstor-access-key-id


    QingStor Access Key ID.


    Leave blank for anonymous access or runtime credentials.


    Properties:

- Config:      access_key_id
- Env Var:     RCLONE_QINGSTOR_ACCESS_KEY_ID
- Type:        string
- Required:    false

    --qingstor-secret-access-key


    QingStor Secret Access Key (password).


    Leave blank for anonymous access or runtime credentials.


    Properties:

- Config:      secret_access_key
- Env Var:     RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- Type:        string
- Required:    false

    --qingstor-endpoint


    Enter an endpoint URL to connection QingStor API.


    Leave blank will use the default value "https://qingstor.com:443".


    Properties:

- Config:      endpoint
- Env Var:     RCLONE_QINGSTOR_ENDPOINT
- Type:        string
- Required:    false

    --qingstor-zone


    Zone to connect to.


    Default is "pek3a".


    Properties:

- Config:      zone
- Env Var:     RCLONE_QINGSTOR_ZONE
- Type:        string
- Required:    false
- Examples:
    - "pek3a"
        - The Beijing (China) Three Zone.
        - Needs location constraint pek3a.
    - "sh1a"
        - The Shanghai (China) First Zone.
        - Needs location constraint sh1a.
    - "gd2a"
        - The Guangdong (China) Second Zone.
        - Needs location constraint gd2a.

    Advanced options


    Here are the Advanced options specific to qingstor (QingCloud Object Storage).


    --qingstor-connection-retries


    Number of connection retries.


    Properties:

- Config:      connection_retries
- Env Var:     RCLONE_QINGSTOR_CONNECTION_RETRIES
- Type:        int
- Default:     3

    --qingstor-upload-cutoff


    Cutoff for switching to chunked upload.


    Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.


    Properties:

- Config:      upload_cutoff
- Env Var:     RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type:        SizeSuffix
- Default:     200Mi

    --qingstor-chunk-size


    Chunk size to use for uploading.


    When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.


    Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.


    If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.


    Properties:

- Config:      chunk_size
- Env Var:     RCLONE_QINGSTOR_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     4Mi

    --qingstor-upload-concurrency


    Concurrency for multipart uploads.


    This is the number of chunks of the same file that are uploaded concurrently.


    NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).


    If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.


    Properties:

- Config:      upload_concurrency
- Env Var:     RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
- Type:        int
- Default:     1

    --qingstor-encoding


    The encoding for the backend.


    See the encoding section in the overview for more info.


    Properties:

- Config:      encoding
- Env Var:     RCLONE_QINGSTOR_ENCODING
- Type:        Encoding
- Default:     Slash,Ctl,InvalidUtf8

    --qingstor-description


    Description of the remote


    Properties:


    Limitations


    rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.


    See List of backends that do not support rclone about and rclone about


    Quatrix


    Quatrix by Maytech is Quatrix Secure Compliant File Sharing | Maytech.


    Paths are specified as remote:path


    Paths may be as deep as required, e.g., remote:directory/subdirectory.


    The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.


    See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer


    Configuration


    Here is an example of how to make a remote called remote. First run:

    rclone config

    This will guide you through an interactive setup process:

    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> remote
    Type of storage to configure.
    Choose a number from below, or type in your own value
    [snip]
    XX / Quatrix by Maytech
       \ "quatrix"
    [snip]
    Storage> quatrix
    API key for accessing Quatrix account.
    api_key> your_api_key
    Host name of Quatrix account.
    host> example.quatrix.it

    --------------------
    [remote]
    api_key = your_api_key
    host = example.quatrix.it
    --------------------
    y) Yes this is OK
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

    Once configured you can then use rclone like this,


    List directories in top level of your Quatrix

    rclone lsd remote:

    List all the files in your Quatrix

    rclone ls remote:

To copy a local directory to a Quatrix directory called backup

    rclone copy /home/source remote:backup

    API key validity


    API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can update it in rclone config. The same happens if the hostname was changed.

    $ rclone config
    Current remotes:

    Name                 Type
    ====                 ====
    remote               quatrix

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Quatrix by Maytech
   \ "quatrix"
[snip]
Storage> quatrix
API key for accessing Quatrix account.
api_key> your_api_key
Host name of Quatrix account.
host> example.quatrix.it
--------------------
[remote]
api_key = your_api_key
host = example.quatrix.it
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can then use rclone like this,

List directories in top level of your Quatrix

    rclone lsd remote:

List all the files in your Quatrix

    rclone ls remote:

To copy a local directory to a Quatrix directory called backup

    rclone copy /home/source remote:backup
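If you prefer to skip the interactive flow, the same remote can be created in one step with `rclone config create` (a sketch reusing the API key and host from the example above):

```
rclone config create remote quatrix \
    api_key=your_api_key \
    host=example.quatrix.it
```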
    ### API key validity
API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. A disabled API Key can be enabled again. If the API Key was deleted and a new key was created, you can update it in rclone config. The same applies if the hostname was changed.
```
$ rclone config
Current remotes:

Name                 Type
====                 ====
remote               quatrix

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> e
Choose a number from below, or type in an existing value
 1 > remote
remote> remote
--------------------
[remote]
type = quatrix
host = some_host.quatrix.it
api_key = your_api_key
--------------------
Edit remote
Option api_key.
API key for accessing Quatrix account
Enter a string value. Press Enter for the default (your_api_key)
api_key>
Option host.
Host name of Quatrix account
Enter a string value. Press Enter for the default (some_host.quatrix.it).
host>
--------------------
[remote]
type = quatrix
host = some_host.quatrix.it
api_key = your_api_key
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
    ### Modification times and hashes
    Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.
    Quatrix does not support hashes, so you cannot use the --checksum flag.
    ### Restricted filename characters
File names in Quatrix are case sensitive and must be between 1 and 255 characters long. A file name cannot be equal to . or .. nor contain / , \ or non-printable ascii.
    ### Transfers
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, so increasing --transfers will increase memory use. The minimal chunk size is 10_000_000 bytes by default and can be changed in the advanced configuration. The chunk size also has a maximum limit, 100_000_000 bytes by default, which can likewise be changed in the advanced configuration. The size of each uploaded chunk changes dynamically with the upload speed: the total memory use equals the number of transfers multiplied by the minimal chunk size, and if there is free memory allocated for the upload (the difference between maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase when the upload speed is high and decrease again when the upload speed drops. If no free memory is available, all chunks will equal minimal_chunk_size.
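As a sketch of the arithmetic above (the flag values are illustrative): with 4 transfers and the default minimal chunk size of 10_000_000 bytes, at least 40_000_000 bytes will be buffered, and chunks may grow while the total stays within the configured maximum:

```
rclone copy /home/source remote:backup --transfers 4 \
    --quatrix-minimal-chunk-size 10M \
    --quatrix-maximal-summary-chunk-size 100M
```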
    ### Deleting files
    Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
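If you want deletions to bypass the Trash entirely, the hard_delete option described below can be supplied as a flag (a sketch; the path is hypothetical):

```
rclone delete remote:backup/old-file.txt --quatrix-hard-delete
```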
    ### Standard options
    Here are the Standard options specific to quatrix (Quatrix by Maytech).
    #### --quatrix-api-key
    API key for accessing Quatrix account
    Properties:
- Config:      api_key
- Env Var:     RCLONE_QUATRIX_API_KEY
- Type:        string
- Required:    true
    #### --quatrix-host
    Host name of Quatrix account
    Properties:
- Config:      host
- Env Var:     RCLONE_QUATRIX_HOST
- Type:        string
- Required:    true
    ### Advanced options
    Here are the Advanced options specific to quatrix (Quatrix by Maytech).
    #### --quatrix-encoding
    The encoding for the backend.
    See the encoding section in the overview for more info.
    Properties:
- Config:      encoding
- Env Var:     RCLONE_QUATRIX_ENCODING
- Type:        Encoding
- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
    #### --quatrix-effective-upload-time
    Wanted upload time for one chunk
    Properties:
- Config:      effective_upload_time
- Env Var:     RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
- Type:        string
- Default:     "4s"
    #### --quatrix-minimal-chunk-size
    The minimal size for one chunk
    Properties:
- Config:      minimal_chunk_size
- Env Var:     RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     9.537Mi
    #### --quatrix-maximal-summary-chunk-size
    The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size'
    Properties:
- Config:      maximal_summary_chunk_size
- Env Var:     RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     95.367Mi
    #### --quatrix-hard-delete
    Delete files permanently rather than putting them into the trash.
    Properties:
- Config:      hard_delete
- Env Var:     RCLONE_QUATRIX_HARD_DELETE
- Type:        bool
- Default:     false
    ## Storage usage
The storage usage in Quatrix is restricted by the quota purchased for the account. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota.
    ## Server-side operations
    Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation.
    # Sia
Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, it is best to first familiarize yourself using their excellent support documentation.
    ## Introduction
Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on your local network (e.g. a NAS). Please follow the Get started guide and install one.
rclone interacts with the Sia network by talking to the Sia daemon via an HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
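Putting the provisions above together, a minimal sketch for a daemon on another host might look like this (the host name nas.local and the password are hypothetical):

```
# On the remote node: expose the API and enforce a password
export SIA_API_PASSWORD=yoursecret
siad --api-addr :9980 --disable-api-security

# On the rclone host: point the backend at that daemon
rclone lsd mySia: \
    --sia-api-url http://nas.local:9980 \
    --sia-api-password "$(rclone obscure yoursecret)"
```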
    ## Configuration
    Here is an example of how to make a sia remote called mySia. First, run:
    rclone config
    This will guide you through an interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> mySia
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
...
29 / Sia Decentralized Cloud
   \ "sia"
...
Storage> sia
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.
Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
api_url> http://127.0.0.1:9980
Sia Daemon API Password.
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[mySia]
type = sia
api_url = http://127.0.0.1:9980
api_password = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

    -
    
-Once configured, you can then use `rclone` like this:
-
-- List directories in top level of your Sia storage
-
-    rclone lsd mySia:
-
-- List all the files in your Sia storage
-
-    rclone ls mySia:
-
-- Upload a local directory to the Sia directory called _backup_
-
-    rclone copy /home/source mySia:backup
-
    -### Standard options
    -
    -Here are the Standard options specific to sia (Sia Decentralized Cloud).
    -
    -#### --sia-api-url
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/r/c/s/q> e
    +Choose a number from below, or type in an existing value
    + 1 > remote
    +remote> remote
    +--------------------
    +[remote]
    +type = quatrix
    +host = some_host.quatrix.it
    +api_key = your_api_key
    +--------------------
    +Edit remote
    +Option api_key.
    +API key for accessing Quatrix account
    +Enter a string value. Press Enter for the default (your_api_key)
    +api_key>
    +Option host.
    +Host name of Quatrix account
    +Enter a string value. Press Enter for the default (some_host.quatrix.it).
     
    +--------------------
    +[remote]
    +type = quatrix
    +host = some_host.quatrix.it
    +api_key = your_api_key
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Modification times and hashes

    +

    Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.

    +

    Quatrix does not support hashes, so you cannot use the --checksum flag.

    +

    Restricted filename characters

    +

File names in Quatrix are case sensitive and must be between 1 and 255 characters long. A file name cannot be equal to . or .. nor contain / , \ or non-printable ascii.

    +

    Transfers

    +

For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, so increasing --transfers will increase memory use. The minimal chunk size is 10_000_000 bytes by default and can be changed in the advanced configuration. The chunk size also has a maximum limit, 100_000_000 bytes by default, which can likewise be changed in the advanced configuration. The size of each uploaded chunk changes dynamically with the upload speed: the total memory use equals the number of transfers multiplied by the minimal chunk size, and if there is free memory allocated for the upload (the difference between maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase when the upload speed is high and decrease again when the upload speed drops. If no free memory is available, all chunks will equal minimal_chunk_size.

    +

    Deleting files

    +

    Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.

    +

    Standard options

    +

    Here are the Standard options specific to quatrix (Quatrix by Maytech).

    +

    --quatrix-api-key

    +

    API key for accessing Quatrix account

    +

    Properties:

- Config:      api_key
- Env Var:     RCLONE_QUATRIX_API_KEY
- Type:        string
- Required:    true

    --quatrix-host

    +

    Host name of Quatrix account

    +

    Properties:

- Config:      host
- Env Var:     RCLONE_QUATRIX_HOST
- Type:        string
- Required:    true

    Advanced options

    +

    Here are the Advanced options specific to quatrix (Quatrix by Maytech).

    +

    --quatrix-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

- Config:      encoding
- Env Var:     RCLONE_QUATRIX_ENCODING
- Type:        Encoding
- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot

    --quatrix-effective-upload-time

    +

    Wanted upload time for one chunk

    +

    Properties:

- Config:      effective_upload_time
- Env Var:     RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
- Type:        string
- Default:     "4s"

    --quatrix-minimal-chunk-size

    +

    The minimal size for one chunk

    +

    Properties:

- Config:      minimal_chunk_size
- Env Var:     RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     9.537Mi

    --quatrix-maximal-summary-chunk-size

    +

    The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size'

    +

    Properties:

- Config:      maximal_summary_chunk_size
- Env Var:     RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
- Type:        SizeSuffix
- Default:     95.367Mi

    --quatrix-hard-delete

    +

    Delete files permanently rather than putting them into the trash

    +

    Properties:

- Config:      hard_delete
- Env Var:     RCLONE_QUATRIX_HARD_DELETE
- Type:        bool
- Default:     false

    --quatrix-skip-project-folders

    +

    Skip project folders in operations

    +

    Properties:

- Config:      skip_project_folders
- Env Var:     RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS
- Type:        bool
- Default:     false

    --quatrix-description

    +

    Description of the remote

    +

    Properties:

- Config:      description
- Env Var:     RCLONE_QUATRIX_DESCRIPTION
- Type:        string
- Required:    false

    Storage usage

    +

The storage usage in Quatrix is restricted by the quota purchased for the account. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota.

    +

    Server-side operations

    +

    Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation.

    +

    Sia

    +

Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, it is best to first familiarize yourself using their excellent support documentation.

    +

    Introduction

    +

Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on your local network (e.g. a NAS). Please follow the Get started guide and install one.

    +

rclone interacts with the Sia network by talking to the Sia daemon via an HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).

    +

However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.

    +

Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.

    +

    Configuration

    +

    Here is an example of how to make a sia remote called mySia. First, run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> mySia
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +...
    +29 / Sia Decentralized Cloud
    +   \ "sia"
    +...
    +Storage> sia
     Sia daemon API URL, like http://sia.daemon.host:9980.
    -
     Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
     Keep default if Sia daemon runs on localhost.
    -
    -Properties:
    -
    -- Config:      api_url
    -- Env Var:     RCLONE_SIA_API_URL
    -- Type:        string
    -- Default:     "http://127.0.0.1:9980"
    -
    -#### --sia-api-password
    -
    +Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
    +api_url> http://127.0.0.1:9980
     Sia Daemon API Password.
    -
     Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      api_password
    -- Env Var:     RCLONE_SIA_API_PASSWORD
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to sia (Sia Decentralized Cloud).
    -
    -#### --sia-user-agent
    -
    -Siad User Agent
    -
    -Sia daemon requires the 'Sia-Agent' user agent by default for security
    -
    -Properties:
    -
    -- Config:      user_agent
    -- Env Var:     RCLONE_SIA_USER_AGENT
    -- Type:        string
    -- Default:     "Sia-Agent"
    -
    -#### --sia-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_SIA_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
    -
    -
    -
    -## Limitations
    -
    -- Modification times not supported
    -- Checksums not supported
    -- `rclone about` not supported
    -- rclone can work only with _Siad_ or _Sia-UI_ at the moment,
    -  the **SkyNet daemon is not supported yet.**
-- Sia does not allow control characters or symbols like question and pound
-  signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding)
-  them for you, but you should be aware of it
    -
    -#  Swift
    -
    -Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
    -Commercial implementations of that being:
    -
    -  * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
    -  * [Memset Memstore](https://www.memset.com/cloud/storage/)
    -  * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
    -  * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
    -  * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/)
    -  * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
    -
    -Paths are specified as `remote:container` (or `remote:` for the `lsd`
    -command.)  You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
    -
    -## Configuration
    -
    -Here is an example of making a swift configuration.  First run
    -
    -    rclone config
    -
    -This will guide you through an interactive setup process.
    -
    -

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
   \ "swift"
[snip]
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
Choose a number from below, or type in your own value
 1 / Enter swift credentials in the next step
   \ "false"
 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
   \ "true"
env_auth> true
User name to log in (OS_USERNAME).
user>
API key or password (OS_PASSWORD).
key>
Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
 1 / Rackspace US
   \ "https://auth.api.rackspacecloud.com/v1.0"
 2 / Rackspace UK
   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
 3 / Rackspace v2
   \ "https://identity.api.rackspacecloud.com/v2.0"
 4 / Memset Memstore UK
   \ "https://auth.storage.memset.com/v1.0"
 5 / Memset Memstore UK v2
   \ "https://auth.storage.memset.com/v2.0"
 6 / OVH
   \ "https://auth.cloud.ovh.net/v3"
 7 / Blomp Cloud Storage
   \ "https://authenticate.ain.net"
auth>
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
user_id>
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
domain>
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
tenant>
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
tenant_id>
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
tenant_domain>
Region name - optional (OS_REGION_NAME)
region>
Storage URL - optional (OS_STORAGE_URL)
storage_url>
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
auth_token>
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
auth_version>
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
 1 / Public (default, choose this if not sure)
   \ "public"
 2 / Internal (use internal service net)
   \ "internal"
 3 / Admin
   \ "admin"
endpoint_type>
Remote config
--------------------
[test]
env_auth = true
user =
key =
auth =
user_id =
domain =
tenant =
tenant_id =
tenant_domain =
region =
storage_url =
auth_token =
auth_version =
endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

    -
    
    -This remote is called `remote` and can now be used like this
    -
    -See all containers
    -
    -    rclone lsd remote:
    -
    -Make a new container
    -
    -    rclone mkdir remote:container
    -
    -List the contents of a container
    -
    -    rclone ls remote:container
    -
    -Sync `/home/local/directory` to the remote container, deleting any
    -excess files in the container.
    -
    -    rclone sync --interactive /home/local/directory remote:container
    -
    -### Configuration from an OpenStack credentials file
    -
-An OpenStack credentials file typically looks something
-like this (without the comments)
    -
    -

```
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
export OS_USERNAME="123abc567xy"
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_REGION_NAME="SBG1"
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
```

    -
    
    -The config file needs to look something like this where `$OS_USERNAME`
    -represents the value of the `OS_USERNAME` variable - `123abc567xy` in
    -the example above.
    -
    -

```
[remote]
type = swift
user = $OS_USERNAME
key = $OS_PASSWORD
auth = $OS_AUTH_URL
tenant = $OS_TENANT_NAME
```

    -
    
    -Note that you may (or may not) need to set `region` too - try without first.
    -
    -### Configuration from the environment
    -
    -If you prefer you can configure rclone to use swift using a standard
    -set of OpenStack environment variables.
    -
    -When you run through the config, make sure you choose `true` for
    -`env_auth` and leave everything else blank.
    -
    -rclone will then set any empty config parameters from the environment
    -using standard OpenStack environment variables.  There is [a list of
    -the
    -variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
    -in the docs for the swift library.
    -
    -### Using an alternate authentication method
    -
-If your OpenStack installation uses a non-standard authentication method
-that might not yet be supported by rclone or the underlying swift library,
-you can authenticate externally (e.g. by manually calling the `openstack`
-commands to get a token). Then, you just need to pass the two
-configuration variables ``auth_token`` and ``storage_url``.
-If they are both provided, the other variables are ignored. rclone will
-not try to authenticate but instead assume it is already authenticated
-and use these two variables to access the OpenStack installation.
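A sketch of what this looks like in practice (the token and storage URL values are placeholders you would obtain externally):

```
rclone lsd remote: \
    --swift-auth-token "gAAAAAB-example-token" \
    --swift-storage-url "https://storage.example.com/v1/AUTH_tenant"
```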
    -
    -#### Using rclone without a config file
    -
    -You can use rclone with swift without a config file, if desired, like
    -this:
    -
    -

```
source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
```

    -
    
    -### --fast-list
    -
    -This remote supports `--fast-list` which allows you to use fewer
    -transactions in exchange for more memory. See the [rclone
    -docs](https://rclone.org/docs/#fast-list) for more details.
    -
    -### --update and --use-server-modtime
    -
-As noted below, the modified time is stored as metadata on the object. It is
-used by default for all operations that require checking the time a file was
-last updated. It allows rclone to treat the remote more like a true filesystem,
-but it is inefficient because it requires an extra API call to retrieve the
-metadata.
    -
    -For many operations, the time the object was last uploaded to the remote is
    -sufficient to determine if it is "dirty". By using `--update` along with
    -`--use-server-modtime`, you can avoid the extra API call and simply upload
    -files whose local modtime is newer than the time it was last uploaded.
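For example, to upload only files whose local modification time is newer than the upload time recorded on the remote:

```
rclone copy --update --use-server-modtime /home/source remote:container
```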
    -
    -### Modification times and hashes
    -
    -The modified time is stored as metadata on the object as
    -`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
    -ns.
    -
    -This is a de facto standard (used in the official python-swiftclient
    -amongst others) for storing the modification time for an object.
    -
    -The MD5 hash algorithm is supported.
    -
    -### Restricted filename characters
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| NUL       | 0x00  | ␀           |
-| /         | 0x2F  | ／          |
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
    -
    -#### --swift-env-auth
    -
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g/n> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +--------------------
    +[mySia]
    +type = sia
    +api_url = http://127.0.0.1:9980
    +api_password = *** ENCRYPTED ***
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Once configured, you can then use rclone like this:

- List directories in top level of your Sia storage

    rclone lsd mySia:

- List all the files in your Sia storage

    rclone ls mySia:

- Upload a local directory to the Sia directory called backup

    rclone copy /home/source mySia:backup
    +

    Standard options

    +

    Here are the Standard options specific to sia (Sia Decentralized Cloud).

    +

    --sia-api-url

    +

    Sia daemon API URL, like http://sia.daemon.host:9980.

    +

    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.

    +

    Properties:

- Config:      api_url
- Env Var:     RCLONE_SIA_API_URL
- Type:        string
- Default:     "http://127.0.0.1:9980"

    --sia-api-password

    +

    Sia Daemon API Password.

    +

    Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

- Config:      api_password
- Env Var:     RCLONE_SIA_API_PASSWORD
- Type:        string
- Required:    false

    Advanced options

    +

    Here are the Advanced options specific to sia (Sia Decentralized Cloud).

    +

    --sia-user-agent

    +

    Siad User Agent

    +

    Sia daemon requires the 'Sia-Agent' user agent by default for security

    +

    Properties:

- Config:      user_agent
- Env Var:     RCLONE_SIA_USER_AGENT
- Type:        string
- Default:     "Sia-Agent"

    --sia-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

- Config:      encoding
- Env Var:     RCLONE_SIA_ENCODING
- Type:        Encoding
- Default:     Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot

    --sia-description

    +

    Description of the remote

    +

    Properties:

- Config:      description
- Env Var:     RCLONE_SIA_DESCRIPTION
- Type:        string
- Required:    false

    Limitations

- Modification times not supported
- Checksums not supported
- rclone about not supported
- rclone can work only with Siad or Sia-UI at the moment, the SkyNet daemon is not supported yet.
- Sia does not allow control characters or symbols like question and pound signs in file names. rclone will transparently encode them for you, but you should be aware of it

    Swift

    +

    Swift refers to OpenStack Object Storage. Commercial implementations of that being:

- Rackspace Cloud Files
- Memset Memstore
- OVH Object Storage
- Oracle Cloud Storage
- Blomp Cloud Storage
- IBM Bluemix Cloud ObjectStorage Swift

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

    +

    Configuration

    +

    Here is an example of making a swift configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
    +   \ "swift"
    +[snip]
    +Storage> swift
     Get swift credentials from environment variables in standard OpenStack form.
    -
    -Properties:
    -
    -- Config:      env_auth
    -- Env Var:     RCLONE_SWIFT_ENV_AUTH
    -- Type:        bool
    -- Default:     false
    -- Examples:
    -    - "false"
    -        - Enter swift credentials in the next step.
    -    - "true"
    -        - Get swift credentials from environment vars.
    -        - Leave other fields blank if using this.
    -
    -#### --swift-user
    -
    +Choose a number from below, or type in your own value
    + 1 / Enter swift credentials in the next step
    +   \ "false"
    + 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
    +   \ "true"
    +env_auth> true
     User name to log in (OS_USERNAME).
    -
    -Properties:
    -
    -- Config:      user
    -- Env Var:     RCLONE_SWIFT_USER
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-key
    -
    +user> 
     API key or password (OS_PASSWORD).
    -
    -Properties:
    -
    -- Config:      key
    -- Env Var:     RCLONE_SWIFT_KEY
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-auth
    -
    +key> 
     Authentication URL for server (OS_AUTH_URL).
    -
    -Properties:
    -
    -- Config:      auth
    -- Env Var:     RCLONE_SWIFT_AUTH
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - "https://auth.api.rackspacecloud.com/v1.0"
    -        - Rackspace US
    -    - "https://lon.auth.api.rackspacecloud.com/v1.0"
    -        - Rackspace UK
    -    - "https://identity.api.rackspacecloud.com/v2.0"
    -        - Rackspace v2
    -    - "https://auth.storage.memset.com/v1.0"
    -        - Memset Memstore UK
    -    - "https://auth.storage.memset.com/v2.0"
    -        - Memset Memstore UK v2
    -    - "https://auth.cloud.ovh.net/v3"
    -        - OVH
    -    - "https://authenticate.ain.net"
    -        - Blomp Cloud Storage
    -
    -#### --swift-user-id
    -
    +Choose a number from below, or type in your own value
    + 1 / Rackspace US
    +   \ "https://auth.api.rackspacecloud.com/v1.0"
    + 2 / Rackspace UK
    +   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
    + 3 / Rackspace v2
    +   \ "https://identity.api.rackspacecloud.com/v2.0"
    + 4 / Memset Memstore UK
    +   \ "https://auth.storage.memset.com/v1.0"
    + 5 / Memset Memstore UK v2
    +   \ "https://auth.storage.memset.com/v2.0"
    + 6 / OVH
    +   \ "https://auth.cloud.ovh.net/v3"
+ 7 / Blomp Cloud Storage
+   \ "https://authenticate.ain.net"
    +auth> 
     User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
    -
    -Properties:
    -
    -- Config:      user_id
    -- Env Var:     RCLONE_SWIFT_USER_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-domain
    -
    +user_id> 
     User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
    -
    -Properties:
    -
    -- Config:      domain
    -- Env Var:     RCLONE_SWIFT_DOMAIN
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-tenant
    -
    -Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
    -
    -Properties:
    -
    -- Config:      tenant
    -- Env Var:     RCLONE_SWIFT_TENANT
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-tenant-id
    -
    -Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
    -
    -Properties:
    -
    -- Config:      tenant_id
    -- Env Var:     RCLONE_SWIFT_TENANT_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-tenant-domain
    -
    -Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
    -
    -Properties:
    -
    -- Config:      tenant_domain
    -- Env Var:     RCLONE_SWIFT_TENANT_DOMAIN
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-region
    -
    -Region name - optional (OS_REGION_NAME).
    -
    -Properties:
    -
    -- Config:      region
    -- Env Var:     RCLONE_SWIFT_REGION
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-storage-url
    -
    -Storage URL - optional (OS_STORAGE_URL).
    -
    -Properties:
    -
    -- Config:      storage_url
    -- Env Var:     RCLONE_SWIFT_STORAGE_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-auth-token
    -
    -Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
    -
    -Properties:
    -
    -- Config:      auth_token
    -- Env Var:     RCLONE_SWIFT_AUTH_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-application-credential-id
    -
    -Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
    -
    -Properties:
    -
    -- Config:      application_credential_id
    -- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-application-credential-name
    -
    -Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
    -
    -Properties:
    -
    -- Config:      application_credential_name
    -- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-application-credential-secret
    -
    -Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
    -
    -Properties:
    -
    -- Config:      application_credential_secret
    -- Env Var:     RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
    -- Type:        string
    -- Required:    false
    -
    -#### --swift-auth-version
    -
    -AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
    -
    -Properties:
    -
    -- Config:      auth_version
    -- Env Var:     RCLONE_SWIFT_AUTH_VERSION
    -- Type:        int
    -- Default:     0
    -
    -#### --swift-endpoint-type
    -
    -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
    -
    -Properties:
    -
    -- Config:      endpoint_type
    -- Env Var:     RCLONE_SWIFT_ENDPOINT_TYPE
    -- Type:        string
    -- Default:     "public"
    -- Examples:
    -    - "public"
    -        - Public (default, choose this if not sure)
    -    - "internal"
    -        - Internal (use internal service net)
    -    - "admin"
    -        - Admin
    -
    -#### --swift-storage-policy
    -
    -The storage policy to use when creating a new container.
    -
    -This applies the specified storage policy when creating a new
    -container. The policy cannot be changed afterwards. The allowed
    -configuration values and their meaning depend on your Swift storage
    -provider.
    -
    -Properties:
    -
    -- Config:      storage_policy
    -- Env Var:     RCLONE_SWIFT_STORAGE_POLICY
    -- Type:        string
    -- Required:    false
    -- Examples:
    -    - ""
    -        - Default
    -    - "pcs"
    -        - OVH Public Cloud Storage
    -    - "pca"
    -        - OVH Public Cloud Archive
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
    -
    -#### --swift-leave-parts-on-error
    -
    -If true avoid calling abort upload on a failure.
    -
    -It should be set to true for resuming uploads across different sessions.
    -
    -Properties:
    -
    -- Config:      leave_parts_on_error
    -- Env Var:     RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
    -- Type:        bool
    -- Default:     false
    -
    -#### --swift-chunk-size
    -
    -Above this size files will be chunked into a _segments container.
    -
    -Above this size files will be chunked into a _segments container.  The
    -default for this is 5 GiB which is its maximum value.
    -
    -Properties:
    -
    -- Config:      chunk_size
    -- Env Var:     RCLONE_SWIFT_CHUNK_SIZE
    -- Type:        SizeSuffix
    -- Default:     5Gi
    -
    -#### --swift-no-chunk
    -
    -Don't chunk files during streaming upload.
    -
    -When doing streaming uploads (e.g. using rcat or mount) setting this
    -flag will cause the swift backend to not upload chunked files.
    -
    -This will limit the maximum upload size to 5 GiB. However non chunked
    -files are easier to deal with and have an MD5SUM.
    -
    -Rclone will still chunk files bigger than chunk_size when doing normal
    -copy operations.
    -
    -Properties:
    -
    -- Config:      no_chunk
    -- Env Var:     RCLONE_SWIFT_NO_CHUNK
    -- Type:        bool
    -- Default:     false
    -
    -#### --swift-no-large-objects
    -
    -Disable support for static and dynamic large objects
    -
    -Swift cannot transparently store files bigger than 5 GiB. There are
    -two schemes for doing that, static or dynamic large objects, and the
    -API does not allow rclone to determine whether a file is a static or
    -dynamic large object without doing a HEAD on the object. Since these
    -need to be treated differently, this means rclone has to issue HEAD
    -requests for objects for example when reading checksums.
    -
    -When `no_large_objects` is set, rclone will assume that there are no
    -static or dynamic large objects stored. This means it can stop doing
    -the extra HEAD calls which in turn increases performance greatly
    -especially when doing a swift to swift transfer with `--checksum` set.
    -
    -Setting this option implies `no_chunk` and also that no files will be
    -uploaded in chunks, so files bigger than 5 GiB will just fail on
    -upload.
    -
    -If you set this option and there *are* static or dynamic large objects,
    -then this will give incorrect hashes for them. Downloads will succeed,
    -but other operations such as Remove and Copy will fail.
    -
    -
    -Properties:
    -
    -- Config:      no_large_objects
    -- Env Var:     RCLONE_SWIFT_NO_LARGE_OBJECTS
    -- Type:        bool
    -- Default:     false
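For example, on a swift to swift transfer where you are certain neither side holds static or dynamic large objects (a sketch; the remote and container names are hypothetical):

```
rclone sync src:container dst:container --checksum --swift-no-large-objects
```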
    -
    -#### --swift-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_SWIFT_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,InvalidUtf8
    -
    -
    -
    -## Limitations
    -
    -The Swift API doesn't return a correct MD5SUM for segmented files
    -(Dynamic or Static Large Objects) so rclone won't check or use the
    -MD5SUM for these.
    -
    -## Troubleshooting
    -
    -### Rclone gives Failed to create file system for "remote:": Bad Request
    -
    -Due to an oddity of the underlying swift library, it gives a "Bad
    -Request" error rather than a more sensible error when the
    -authentication fails for Swift.
    -
    -So this most likely means your username / password is wrong.  You can
    -investigate further with the `--dump-bodies` flag.
    -
    -This may also be caused by specifying the region when you shouldn't
    -have (e.g. OVH).
    -
    -### Rclone gives Failed to create file system: Response didn't have storage url and auth token
    -
    -This is most likely caused by forgetting to specify your tenant when
    -setting up a swift remote.
    -
    -## OVH Cloud Archive
    -
    -To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.
    -
    -### Uploading Objects
    -
-Uploading objects to OVH cloud archive is no different to object storage; you just run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
    -
    -### Retrieving Objects
    -
    -To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
    -
    -`2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)`
    -
    -Rclone will wait for the time specified then retry the copy.
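A minimal sketch of creating such a backend non-interactively (the remote name is hypothetical and credentials are taken from the environment):

```
rclone config create ovh-archive swift env_auth=true storage_policy=pca
```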
    -
    -#  pCloud
    -
    -Paths are specified as `remote:path`
    -
    -Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    -
    -## Configuration
    -
    -The initial setup for pCloud involves getting a token from pCloud which you
    -need to do in your browser.  `rclone config` walks you through it.
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Pcloud
   \ "pcloud"
[snip]
Storage> pcloud
Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
Remote config
Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

    -
    
    -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
    -machine with no Internet browser available.
    -
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from pCloud. This only runs from the moment it opens
-your browser to the moment you get back the verification code.  This
-is on `http://127.0.0.1:53682/` and it may require you to unblock
-it temporarily if you are running a host firewall.
    -
    -Once configured you can then use `rclone` like this,
    -
    -List directories in top level of your pCloud
    -
    -    rclone lsd remote:
    -
    -List all the files in your pCloud
    -
    -    rclone ls remote:
    -
    -To copy a local directory to a pCloud directory called backup
    -
    -    rclone copy /home/source remote:backup
    -
    -### Modification times and hashes
    -
    -pCloud allows modification times to be set on objects accurate to 1
    -second.  These will be used to detect whether objects need syncing or
    -not.  In order to set a Modification time pCloud requires the object
    -be re-uploaded.
    -
    -pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256
    -hashes in the EU region, so you can use the `--checksum` flag.
    -
    -### Restricted filename characters
    -
    -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    -the following characters are also replaced:
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
-| \         | 0x5C  | ＼          |
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -### Deleting files
    -
    -Deleted files will be moved to the trash.  Your subscription level
    -will determine how long items stay in the trash.  `rclone cleanup` can
    -be used to empty the trash.
    -
    -### Emptying the trash
    -
    -Due to an API limitation, the `rclone cleanup` command will only work if you 
    -set your username and password in the advanced options for this backend. 
    -Since we generally want to avoid storing user passwords in the rclone config
    -file, we advise you to only set this up if you need the `rclone cleanup` command to work.
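If you would rather not keep the credentials in the config file, the backend options described below can be supplied on the command line for just this operation (a sketch; the username and password are placeholders):

```
rclone cleanup remote: \
    --pcloud-username you@example.com \
    --pcloud-password "$(rclone obscure yourpassword)"
```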
    -
    -### Root folder ID
    -
    -You can set the `root_folder_id` for rclone.  This is the directory
    -(identified by its `Folder ID`) that rclone considers to be the root
    -of your pCloud drive.
    -
    -Normally you will leave this blank and rclone will determine the
    -correct root to use itself.
    -
    -However you can set this to restrict rclone to a specific folder
    -hierarchy.
    -
    -In order to do this you will have to find the `Folder ID` of the
    -directory you wish rclone to display.  This will be the `folder` field
    -of the URL when you open the relevant folder in the pCloud web
    -interface.
    -
    -So if the folder you want rclone to use has a URL which looks like
    -`https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid`
    -in the browser, then you use `5xxxxxxxx8` as
    -the `root_folder_id` in the config.
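The relevant part of the config would then look something like this (other entries such as the token are omitted for brevity):

```
[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8
```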
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to pcloud (Pcloud).
    -
    -#### --pcloud-client-id
    -
    -OAuth Client Id.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_id
    -- Env Var:     RCLONE_PCLOUD_CLIENT_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --pcloud-client-secret
    -
    -OAuth Client Secret.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_secret
    -- Env Var:     RCLONE_PCLOUD_CLIENT_SECRET
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to pcloud (Pcloud).
    -
    -#### --pcloud-token
    -
    -OAuth Access Token as a JSON blob.
    -
    -Properties:
    -
    -- Config:      token
    -- Env Var:     RCLONE_PCLOUD_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --pcloud-auth-url
    -
    -Auth server URL.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      auth_url
    -- Env Var:     RCLONE_PCLOUD_AUTH_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --pcloud-token-url
    -
    -Token server url.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      token_url
    -- Env Var:     RCLONE_PCLOUD_TOKEN_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --pcloud-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_PCLOUD_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
    -
    -#### --pcloud-root-folder-id
    -
    -Fill in for rclone to use a non root folder as its starting point.
    -
    -Properties:
    -
    -- Config:      root_folder_id
    -- Env Var:     RCLONE_PCLOUD_ROOT_FOLDER_ID
    -- Type:        string
    -- Default:     "d0"
    -
    -#### --pcloud-hostname
    -
    -Hostname to connect to.
    -
    -This is normally set when rclone initially does the oauth connection,
    -however you will need to set it by hand if you are using remote config
    -with rclone authorize.
    -
    -
    -Properties:
    -
    -- Config:      hostname
    -- Env Var:     RCLONE_PCLOUD_HOSTNAME
    -- Type:        string
    -- Default:     "api.pcloud.com"
    -- Examples:
    -    - "api.pcloud.com"
    -        - Original/US region
    -    - "eapi.pcloud.com"
    -        - EU region
    -
    -#### --pcloud-username
    -
    -Your pcloud username.
    -            
    -This is only required when you want to use the cleanup command. Due to a bug
    -in the pcloud API the required API does not support OAuth authentication so
    -we have to rely on user password authentication for it.
    -
    -Properties:
    -
    -- Config:      username
    -- Env Var:     RCLONE_PCLOUD_USERNAME
    -- Type:        string
    -- Required:    false
    -
    -#### --pcloud-password
    -
    -Your pcloud password.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      password
    -- Env Var:     RCLONE_PCLOUD_PASSWORD
    -- Type:        string
    -- Required:    false
    -
    -
    -
    -#  PikPak
    -
    -PikPak is [a private cloud drive](https://mypikpak.com/).
    -
    -Paths are specified as `remote:path`, and may be as deep as required, e.g. `remote:directory/subdirectory`.
    -
    -## Configuration
    -
    -Here is an example of making a remote for PikPak.
    -
    -First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -

```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> remote

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / PikPak
   \ (pikpak)
Storage> XX

Option user.
Pikpak username.
Enter a value.
user> USERNAME

Option pass.
Pikpak password.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: pikpak
- user: USERNAME
- pass: *** ENCRYPTED ***
- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
    -
    
    -### Modification times and hashes
    -
-PikPak keeps modification times on objects, and updates them when uploading objects,
-but it does not support changing only the modification time.
    -
    -The MD5 hash algorithm is supported.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to pikpak (PikPak).
    -
    -#### --pikpak-user
    -
    -Pikpak username.
    -
    -Properties:
    -
    -- Config:      user
    -- Env Var:     RCLONE_PIKPAK_USER
    -- Type:        string
    -- Required:    true
    -
    -#### --pikpak-pass
    -
    -Pikpak password.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      pass
    -- Env Var:     RCLONE_PIKPAK_PASS
    -- Type:        string
    -- Required:    true
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to pikpak (PikPak).
    -
    -#### --pikpak-client-id
    -
    -OAuth Client Id.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_id
    -- Env Var:     RCLONE_PIKPAK_CLIENT_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-client-secret
    -
    -OAuth Client Secret.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_secret
    -- Env Var:     RCLONE_PIKPAK_CLIENT_SECRET
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-token
    -
    -OAuth Access Token as a JSON blob.
    -
    -Properties:
    -
    -- Config:      token
    -- Env Var:     RCLONE_PIKPAK_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-auth-url
    -
    -Auth server URL.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      auth_url
    -- Env Var:     RCLONE_PIKPAK_AUTH_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-token-url
    -
    -Token server url.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      token_url
    -- Env Var:     RCLONE_PIKPAK_TOKEN_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-root-folder-id
    -
    -ID of the root folder.
    -Leave blank normally.
    -
    -Fill in for rclone to use a non root folder as its starting point.
    -
    -
    -Properties:
    -
    -- Config:      root_folder_id
    -- Env Var:     RCLONE_PIKPAK_ROOT_FOLDER_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --pikpak-use-trash
    -
    -Send files to the trash instead of deleting permanently.
    -
    -Defaults to true, namely sending files to the trash.
    -Use `--pikpak-use-trash=false` to delete files permanently instead.
    -
    -Properties:
    -
    -- Config:      use_trash
    -- Env Var:     RCLONE_PIKPAK_USE_TRASH
    -- Type:        bool
    -- Default:     true
    -
    -#### --pikpak-trashed-only
    -
    -Only show files that are in the trash.
    -
    -This will show trashed files in their original directory structure.
    -
    -Properties:
    -
    -- Config:      trashed_only
    -- Env Var:     RCLONE_PIKPAK_TRASHED_ONLY
    -- Type:        bool
    -- Default:     false
    -
    -#### --pikpak-hash-memory-limit
    -
    -Files bigger than this will be cached on disk to calculate the hash if required.
    -
    -Properties:
    -
    -- Config:      hash_memory_limit
    -- Env Var:     RCLONE_PIKPAK_HASH_MEMORY_LIMIT
    -- Type:        SizeSuffix
    -- Default:     10Mi
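    -
    -For example, to raise the limit to 50 MiB (an illustrative value), a hash
    -command could be run like this:
    -
    -    rclone md5sum --pikpak-hash-memory-limit 50Mi remote:path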
    -
    -#### --pikpak-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_PIKPAK_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
    -
    -## Backend commands
    -
    -Here are the commands specific to the pikpak backend.
    -
    -Run them with
    -
    -    rclone backend COMMAND remote:
    -
    -The help below will explain what arguments each command takes.
    -
    -See the [backend](https://rclone.org/commands/rclone_backend/) command for more
    -info on how to pass options and arguments.
    -
    -These can be run on a running backend using the rc command
    -[backend/command](https://rclone.org/rc/#backend-command).
    -
    -### addurl
    -
    -Add offline download task for url
    -
    -    rclone backend addurl remote: [options] [<arguments>+]
    -
    -This command adds offline download task for url.
    -
    -Usage:
    -
    -    rclone backend addurl pikpak:dirpath url
    -
    -Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, 
    -the download will fall back to the default 'My Pack' folder.
    -
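    -For example, to queue a download into a folder called "downloads" (the
    -folder name and URL here are illustrative):
    -
    -    rclone backend addurl pikpak:downloads https://example.com/file.zip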
    -
    -### decompress
    -
    -Request decompress of a file/files in a folder
    -
    -    rclone backend decompress remote: [options] [<arguments>+]
    -
    -This command requests decompress of file/files in a folder.
    -
    -Usage:
    -
    -    rclone backend decompress pikpak:dirpath {filename} -o password=password
    -    rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
    -
    -An optional argument 'filename' can be specified for a file located in 
    -'pikpak:dirpath'. You may want to pass '-o password=password' for 
    -password-protected files. Also, pass '-o delete-src-file' to delete 
    -source files after decompression has finished.
    -
    -Result:
    -
    -    {
    -        "Decompressed": 17,
    -        "SourceDeleted": 0,
    -        "Errors": 0
    -    }
    -
    -
    -
    -
    -## Limitations
    -
    -### Hashes may be empty
    -
    -PikPak supports the MD5 hash, but it is sometimes empty, especially for user-uploaded files.
    -
    -### Deleted files still visible with trashed-only
    -
    -Deleted files will still be visible with `--pikpak-trashed-only` even after the
    -trash has been emptied. This goes away after a few days.
    -
    -#  premiumize.me
    -
    -Paths are specified as `remote:path`
    -
    -Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    -
    -## Configuration
    -
    -The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
    -need to do in your browser.  `rclone config` walks you through it.
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> remote
    -Type of storage to configure.
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / premiumize.me
    -   \ "premiumizeme"
    -[snip]
    -Storage> premiumizeme
    -** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
    -
    -Remote config
    -Use web browser to automatically authenticate rclone with remote?
    - * Say Y if the machine running rclone has a web browser you can use
    - * Say N if running rclone on a (remote) machine without web browser access
    -If not sure try Y. If Y failed, try N.
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[remote]
    -type = premiumizeme
    -token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d>
    -```
    -
    -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
    -machine with no Internet browser available.
    -
    -Note that rclone runs a webserver on your local machine to collect the
    -token as returned from premiumize.me. This only runs from the moment it opens
    -your browser to the moment you get back the verification code.  This
    -is on `http://127.0.0.1:53682/` and it may require you to unblock
    -it temporarily if you are running a host firewall.
    -
    -Once configured you can then use `rclone` like this,
    -
    -List directories in top level of your premiumize.me
    -
    -    rclone lsd remote:
    -
    -List all the files in your premiumize.me
    -
    -    rclone ls remote:
    -
    -To copy a local directory to a premiumize.me directory called backup
    -
    -    rclone copy /home/source remote:backup
    -
    -### Modification times and hashes
    -
    -premiumize.me does not support modification times or hashes, therefore
    -syncing will default to `--size-only` checking.  Note that using
    -`--update` will work.
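    -
    -For example, an illustrative invocation using `--update` (the paths are
    -placeholders):
    -
    -    rclone sync --update /home/source remote:backup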
    -
    -### Restricted filename characters
    -
    -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    -the following characters are also replaced:
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| \         | 0x5C  | ＼           |
    -| "         | 0x22  | ＂           |
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to premiumizeme (premiumize.me).
    -
    -#### --premiumizeme-client-id
    -
    -OAuth Client Id.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_id
    -- Env Var:     RCLONE_PREMIUMIZEME_CLIENT_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --premiumizeme-client-secret
    -
    -OAuth Client Secret.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_secret
    -- Env Var:     RCLONE_PREMIUMIZEME_CLIENT_SECRET
    -- Type:        string
    -- Required:    false
    -
    -#### --premiumizeme-api-key
    -
    -API Key.
    -
    -This is not normally used - use oauth instead.
    -
    -
    -Properties:
    -
    -- Config:      api_key
    -- Env Var:     RCLONE_PREMIUMIZEME_API_KEY
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to premiumizeme (premiumize.me).
    -
    -#### --premiumizeme-token
    -
    -OAuth Access Token as a JSON blob.
    -
    -Properties:
    -
    -- Config:      token
    -- Env Var:     RCLONE_PREMIUMIZEME_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --premiumizeme-auth-url
    -
    -Auth server URL.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      auth_url
    -- Env Var:     RCLONE_PREMIUMIZEME_AUTH_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --premiumizeme-token-url
    -
    -Token server url.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      token_url
    -- Env Var:     RCLONE_PREMIUMIZEME_TOKEN_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --premiumizeme-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_PREMIUMIZEME_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
    -
    -
    -
    -## Limitations
    -
    -Note that premiumize.me is case insensitive so you can't have a file called
    -"Hello.doc" and one called "hello.doc".
    -
    -premiumize.me file names can't have the `\` or `"` characters in.
    -rclone maps these to and from identical looking unicode equivalents
    -`＼` and `＂`.
    -
    -premiumize.me only supports filenames up to 255 characters in length.
    -
    -#  Proton Drive
    -
    -[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault
    - for your files that protects your data.
    -
    -This is an rclone backend for Proton Drive which supports the file transfer
    -features of Proton Drive using the same client-side encryption.
    -
    -Due to the fact that Proton Drive doesn't publish its API documentation, this 
    -backend is implemented with best efforts by reading the open-sourced client 
    -source code and observing the Proton Drive traffic in the browser.
    -
    -**NB** This backend is currently in Beta. It is believed to be correct
    -and all the integration tests pass. However, the Proton Drive protocol
    -has evolved over time, so there may be accounts it is not compatible
    -with. Please [post on the rclone forum](https://forum.rclone.org/) if
    -you find an incompatibility.
    -
    -Paths are specified as `remote:path`
    -
    -Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
    -
    -## Configurations
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Proton Drive
    -   \ "Proton Drive"
    -[snip]
    -Storage> protondrive
    -User name
    -user> you@protonmail.com
    -Password.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank
    -y/g/n> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    -Option 2fa.
    -2FA code (if the account requires one)
    -Enter a value. Press Enter to leave empty.
    -2fa> 123456
    -Remote config
    ---------------------
    -[remote]
    -type = protondrive
    -user = you@protonmail.com
    -pass = *** ENCRYPTED ***
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -**NOTE:** The Proton Drive encryption keys need to have been already generated 
    -after a regular login via the browser, otherwise attempting to use the 
    -credentials in `rclone` will fail.
    -
    -Once configured you can then use `rclone` like this,
    -
    -List directories in top level of your Proton Drive
    -
    -    rclone lsd remote:
    -
    -List all the files in your Proton Drive
    -
    -    rclone ls remote:
    -
    -To copy a local directory to a Proton Drive directory called backup
    -
    -    rclone copy /home/source remote:backup
    -
    -### Modification times and hashes
    -
    -Proton Drive Bridge does not support updating modification times yet.
    -
    -The SHA1 hash algorithm is supported.
    -
    -### Restricted filename characters
    -
    -Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and 
    -right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
    -
    -### Duplicated files
    -
    -Proton Drive cannot have two files with exactly the same name and path. If a 
    -conflict occurs, depending on the advanced config, the file might or might not 
    -be overwritten.
    -
    -### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
    -
    -Please set your mailbox password in the advanced config section.
    -
    -### Caching
    -
    -The cache is currently built for the case where rclone is the only instance 
    -performing operations on the mount point. The event system, which is the Proton
    -API system that provides visibility of what has changed on the drive, is yet 
    -to be implemented, so updates from other clients won't be reflected in the 
    -cache. Thus, if there are concurrent clients accessing the same mount point, 
    -there may be a problem with caching stale data.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to protondrive (Proton Drive).
    -
    -#### --protondrive-username
    -
    -The username of your proton account
    -
    -Properties:
    -
    -- Config:      username
    -- Env Var:     RCLONE_PROTONDRIVE_USERNAME
    -- Type:        string
    -- Required:    true
    -
    -#### --protondrive-password
    -
    -The password of your proton account.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      password
    -- Env Var:     RCLONE_PROTONDRIVE_PASSWORD
    -- Type:        string
    -- Required:    true
    -
    -#### --protondrive-2fa
    -
    -The 2FA code
    -
    -The value can also be provided with --protondrive-2fa=000000
    -
    -The 2FA code of your proton drive account if the account is set up with 
    -two-factor authentication
    -
    -Properties:
    -
    -- Config:      2fa
    -- Env Var:     RCLONE_PROTONDRIVE_2FA
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to protondrive (Proton Drive).
    -
    -#### --protondrive-mailbox-password
    -
    -The mailbox password of your two-password proton account.
    -
    -For more information regarding the mailbox password, please check the 
    -following official knowledge base article: 
    -https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
    -
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      mailbox_password
    -- Env Var:     RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
    -- Type:        string
    -- Required:    false
    -
    -#### --protondrive-client-uid
    -
    -Client uid key (internal use only)
    -
    -Properties:
    -
    -- Config:      client_uid
    -- Env Var:     RCLONE_PROTONDRIVE_CLIENT_UID
    -- Type:        string
    -- Required:    false
    -
    -#### --protondrive-client-access-token
    -
    -Client access token key (internal use only)
    -
    -Properties:
    -
    -- Config:      client_access_token
    -- Env Var:     RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --protondrive-client-refresh-token
    -
    -Client refresh token key (internal use only)
    -
    -Properties:
    -
    -- Config:      client_refresh_token
    -- Env Var:     RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --protondrive-client-salted-key-pass
    -
    -Client salted key pass key (internal use only)
    -
    -Properties:
    -
    -- Config:      client_salted_key_pass
    -- Env Var:     RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
    -- Type:        string
    -- Required:    false
    -
    -#### --protondrive-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_PROTONDRIVE_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
    -
    -#### --protondrive-original-file-size
    -
    -Return the file size before encryption
    -            
    -The size of the encrypted file will be different from (bigger than) the 
    -original file size. Leave this option set to true unless there is a reason 
    -to return the file size after encryption, as features like Open(), which 
    -need to be supplied with the original content size, will otherwise fail to 
    -operate properly.
    -
    -Properties:
    -
    -- Config:      original_file_size
    -- Env Var:     RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
    -- Type:        bool
    -- Default:     true
    -
    -#### --protondrive-app-version
    -
    -The app version string 
    -
    -The app version string indicates the client that is currently performing 
    -the API request. This information is required and will be sent with every 
    -API request.
    -
    -Properties:
    -
    -- Config:      app_version
    -- Env Var:     RCLONE_PROTONDRIVE_APP_VERSION
    -- Type:        string
    -- Default:     "macos-drive@1.0.0-alpha.1+rclone"
    -
    -#### --protondrive-replace-existing-draft
    -
    -Create a new revision when filename conflict is detected
    -
    -When a file upload is cancelled or fails before completion, a draft will be 
    -created and the subsequent upload of the same file to the same location will be 
    -reported as a conflict.
    -
    -The value can also be set by --protondrive-replace-existing-draft=true
    -
    -If the option is set to true, the draft will be replaced and then the upload 
    -operation will restart. If there are other clients also uploading to the same 
    -file location at the same time, the behavior is currently unknown. This needs 
    -to be set to true for the integration tests.
    -If the option is set to false, an error "a draft exists - usually this means a 
    -file is being uploaded at another client, or, there was a failed upload attempt" 
    -will be returned, and no upload will happen.
    -
    -Properties:
    -
    -- Config:      replace_existing_draft
    -- Env Var:     RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
    -- Type:        bool
    -- Default:     false
    -
    -#### --protondrive-enable-caching
    -
    -Caches the files and folders metadata to reduce API calls
    -
    -Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, 
    -as the current implementation doesn't update or clear the cache when there are 
    -external changes. 
    -
    -The files and folders on ProtonDrive are represented as links with keyrings, 
    -which can be cached to improve performance and be friendly to the API server.
    -
    -The cache is currently built for the case where rclone is the only instance 
    -performing operations on the mount point. The event system, which is the Proton
    -API system that provides visibility of what has changed on the drive, is yet 
    -to be implemented, so updates from other clients won't be reflected in the 
    -cache. Thus, if there are concurrent clients accessing the same mount point, 
    -there may be a problem with caching stale data.
    -
    -Properties:
    -
    -- Config:      enable_caching
    -- Env Var:     RCLONE_PROTONDRIVE_ENABLE_CACHING
    -- Type:        bool
    -- Default:     true
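    -
    -For example, when mounting (the mount point below is illustrative), the
    -cache could be disabled like this:
    -
    -    rclone mount --protondrive-enable-caching=false remote: /mnt/protondrive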
    -
    -
    -
    -## Limitations
    -
    -This backend uses the 
    -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which 
    -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a 
    -fork of the [official repo](https://github.com/ProtonMail/go-proton-api).
    -
    -There is no official API documentation available from Proton Drive. But, thanks 
    -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) 
    -and the web, iOS, and Android client codebases, we don't need to completely 
    -reverse engineer the APIs by observing the web client traffic! 
    -
    -[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic 
    -building blocks of API calls and error handling, such as 429 exponential 
    -back-off, but it is pretty much just a barebones interface to the Proton API. 
    -For example, the encryption and decryption of Proton Drive files are not 
    -provided in this library. 
    -
    -The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on 
    -top of it quickly. This codebase handles the intricate tasks before and after 
    -calling Proton APIs, particularly the complex encryption scheme, allowing 
    -developers to implement features for other software on top of this codebase. 
    -There are likely quite a few errors in this library, as there isn't official 
    -documentation available.
    -
    -#  put.io
    -
    -Paths are specified as `remote:path`
    -
    -put.io paths may be as deep as required, e.g.
    -`remote:directory/subdirectory`.
    -
    -## Configuration
    -
    -The initial setup for put.io involves getting a token from put.io
    -which you need to do in your browser.  `rclone config` walks you
    -through it.
    -
    -Here is an example of how to make a remote called `remote`.  First run:
    -
    -     rclone config
    -
    -This will guide you through an interactive setup process:
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> putio
    -Type of storage to configure.
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Put.io
    -   \ "putio"
    -[snip]
    -Storage> putio
    -** See help for putio backend at: https://rclone.org/putio/ **
    -
    -Remote config
    -Use web browser to automatically authenticate rclone with remote?
    - * Say Y if the machine running rclone has a web browser you can use
    - * Say N if running rclone on a (remote) machine without web browser access
    -If not sure try Y. If Y failed, try N.
    -y) Yes
    -n) No
    -y/n> y
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    -Got code
    ---------------------
    -[putio]
    -type = putio
    -token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -Current remotes:
    -
    -Name                 Type
    -====                 ====
    -putio                putio
    -
    -e) Edit existing remote
    -n) New remote
    -d) Delete remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -e/n/d/r/c/s/q> q
    -```
    -
    -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
    -machine with no Internet browser available.
    -
    -Note that rclone runs a webserver on your local machine to collect the
    -token as returned from put.io if using the web browser to automatically 
    -authenticate. This only
    -runs from the moment it opens your browser to the moment you get back
    -the verification code.  This is on `http://127.0.0.1:53682/` and it
    -may require you to unblock it temporarily if you are running a host
    -firewall, or use manual mode.
    -
    -You can then use it like this,
    -
    -List directories in top level of your put.io
    -
    -    rclone lsd remote:
    -
    -List all the files in your put.io
    -
    -    rclone ls remote:
    -
    -To copy a local directory to a put.io directory called backup
    -
    -    rclone copy /home/source remote:backup
    -
    -### Restricted filename characters
    -
    -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    -the following characters are also replaced:
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| \         | 0x5C  | ＼           |
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to putio (Put.io).
    -
    -#### --putio-client-id
    -
    -OAuth Client Id.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_id
    -- Env Var:     RCLONE_PUTIO_CLIENT_ID
    -- Type:        string
    -- Required:    false
    -
    -#### --putio-client-secret
    -
    -OAuth Client Secret.
    -
    -Leave blank normally.
    -
    -Properties:
    -
    -- Config:      client_secret
    -- Env Var:     RCLONE_PUTIO_CLIENT_SECRET
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to putio (Put.io).
    -
    -#### --putio-token
    -
    -OAuth Access Token as a JSON blob.
    -
    -Properties:
    -
    -- Config:      token
    -- Env Var:     RCLONE_PUTIO_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -#### --putio-auth-url
    -
    -Auth server URL.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      auth_url
    -- Env Var:     RCLONE_PUTIO_AUTH_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --putio-token-url
    -
    -Token server url.
    -
    -Leave blank to use the provider defaults.
    -
    -Properties:
    -
    -- Config:      token_url
    -- Env Var:     RCLONE_PUTIO_TOKEN_URL
    -- Type:        string
    -- Required:    false
    -
    -#### --putio-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_PUTIO_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
    -
    -
    -
    -## Limitations
    -
    -put.io has rate limiting. When you hit a limit, rclone automatically
    -retries after waiting the amount of time requested by the server.
    -
    -If you want to avoid ever hitting these limits, you may use the
    -`--tpslimit` flag with a low number. Note that the imposed limits
    -may be different for different operations, and may change over time.
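    -
    -For example, to limit rclone to 2 transactions per second (an illustrative
    -value - tune it to your workload):
    -
    -    rclone copy --tpslimit 2 /home/source remote:backup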
    -
    -#  Seafile
    -
    -This is a backend for the [Seafile](https://www.seafile.com/) storage service:
    -- It works with both the free community edition and the professional edition.
    -- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
    -- Encrypted libraries are also supported.
    -- It supports 2FA-enabled users
    -- Using a Library API Token is **not** supported
    -
    -## Configuration
    -
    -There are two distinct modes in which you can set up your remote:
    -- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
    -Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
    -- you point your remote to a specific library during the configuration:
    -Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
    -
    -### Configuration in root mode
    -
    -Here is an example of making a seafile configuration for a user with **no** two-factor authentication.  First run
    -
    -    rclone config
    -
    -This will guide you through an interactive setup process. To authenticate
    -you will need the URL of your server, your email (or username) and your password.
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> seafile
    -Type of storage to configure.
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Seafile
    -   \ "seafile"
    -[snip]
    -Storage> seafile
    -** See help for seafile backend at: https://rclone.org/seafile/ **
    -
    -URL of seafile host to connect to
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    - 1 / Connect to cloud.seafile.com
    -   \ "https://cloud.seafile.com/"
    -url> http://my.seafile.server/
    -User name (usually email address)
    -Enter a string value. Press Enter for the default ("").
    -user> me@example.com
    -Password
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank (default)
    -y/g> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    -Two-factor authentication ('true' if the account has 2FA enabled)
    -Enter a boolean value (true or false). Press Enter for the default ("false").
    -2fa> false
    -Name of the library.
    -Leave blank to access all non-encrypted libraries.
    -Enter a string value. Press Enter for the default ("").
    -library>
    -Library password (for encrypted libraries only).
    -Leave blank if you pass it through the command line.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank (default)
    -y/g/n> n
    -Edit advanced config? (y/n)
    -y) Yes
    -n) No (default)
    -y/n> n
    -Remote config
    -Two-factor authentication is not enabled on this account.
    ---------------------
    -[seafile]
    -type = seafile
    -url = http://my.seafile.server/
    -user = me@example.com
    -pass = *** ENCRYPTED ***
    -2fa = false
    ---------------------
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:
    -
    -See all libraries
    -
    -    rclone lsd seafile:
    -
    -Create a new library
    -
    -    rclone mkdir seafile:library
    -
    -List the contents of a library
    -
    -    rclone ls seafile:library
    -
    -Sync `/home/local/directory` to the remote library, deleting any
    -excess files in the library.
    -
    -    rclone sync --interactive /home/local/directory seafile:library
    -
    -### Configuration in library mode
    -
    -Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> seafile
    -Type of storage to configure.
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Seafile
    -   \ "seafile"
    -[snip]
    -Storage> seafile
    -** See help for seafile backend at: https://rclone.org/seafile/ **
    -
    -URL of seafile host to connect to
    -Enter a string value. Press Enter for the default ("").
    -Choose a number from below, or type in your own value
    - 1 / Connect to cloud.seafile.com
    -   \ "https://cloud.seafile.com/"
    -url> http://my.seafile.server/
    -User name (usually email address)
    -Enter a string value. Press Enter for the default ("").
    -user> me@example.com
    -Password
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank (default)
    -y/g> y
    -Enter the password:
    -password:
    -Confirm the password:
    -password:
    -Two-factor authentication ('true' if the account has 2FA enabled)
    -Enter a boolean value (true or false). Press Enter for the default ("false").
    -2fa> true
    -Name of the library.
    -Leave blank to access all non-encrypted libraries.
    -Enter a string value. Press Enter for the default ("").
    -library> My Library
    -Library password (for encrypted libraries only).
    -Leave blank if you pass it through the command line.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank (default)
    -y/g/n> n
    -Edit advanced config? (y/n)
    -y) Yes
    -n) No (default)
    -y/n> n
    -Remote config
    -Two-factor authentication: please enter your 2FA code
    -2fa code> 123456
    -Authenticating...
    -Success!
    ---------------------
    -[seafile]
    -type = seafile
    -url = http://my.seafile.server/
    -user = me@example.com
    -pass = 
    -2fa = true
    -library = My Library
    ---------------------
    -y) Yes this is OK (default)
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -You'll notice your password is blank in the configuration. This is because the password is only needed to authenticate you once.
    -
    -You specified `My Library` during the configuration. The root of the remote is pointing at the
    -root of the library `My Library`:
    -
    -See all files in the library:
    -
    -    rclone lsd seafile:
    -
    -Create a new directory inside the library
    -
    -    rclone mkdir seafile:directory
    -
    -List the contents of a directory
    -
    -    rclone ls seafile:directory
    -
    -Sync `/home/local/directory` to the remote library, deleting any
    -excess files in the library.
    -
    -    rclone sync --interactive /home/local/directory seafile:
    -
    -
    -### --fast-list
    -
    -Seafile version 7+ supports `--fast-list` which allows you to use fewer
    -transactions in exchange for more memory. See the [rclone
    -docs](https://rclone.org/docs/#fast-list) for more details.
    -Please note this is not supported on seafile server version 6.x
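    -
    -For example (an illustrative listing):
    -
    -    rclone ls --fast-list seafile:library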
    -
    -
    -### Restricted filename characters
    -
    -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
    -the following characters are also replaced:
    -
    -| Character | Value | Replacement |
    -| --------- |:-----:|:-----------:|
    -| /         | 0x2F  | ／          |
    -| "         | 0x22  | ＂          |
    -| \         | 0x5C  | ＼           |
    -
    -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
    -as they can't be used in JSON strings.
    -
    -### Seafile and rclone link
    -
    -Rclone supports generating share links for non-encrypted libraries only.
    -They can either be for a file or a directory:
    -
    -```
    -rclone link seafile:seafile-tutorial.doc
    -http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
    -```
    -
    -or if run on a directory you will get:
    -
    -```
    -rclone link seafile:dir
    -http://my.seafile.server/d/9ea2455f6f55478bbb0d/
    -```
    -
    -Please note a share link is unique for each file or directory. If you run a link command on a file/dir
    -that has already been shared, you will get the exact same link.
    -
    -### Compatibility
    -
    -It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
    -- 6.3.4 community edition
    -- 7.0.5 community edition
    -- 7.1.3 community edition
    -- 9.0.10 community edition
    -
    -Versions below 6.0 are not supported.
    -Versions between 6.0 and 6.3 haven't been tested and might not work properly.
    -
    -Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.
    -
    -
    -### Standard options
    -
    -Here are the Standard options specific to seafile (seafile).
    -
    -#### --seafile-url
    -
    -URL of seafile host to connect to.
    -
    -Properties:
    -
    -- Config:      url
    -- Env Var:     RCLONE_SEAFILE_URL
    -- Type:        string
    -- Required:    true
    -- Examples:
    -    - "https://cloud.seafile.com/"
    -        - Connect to cloud.seafile.com.
    -
    -#### --seafile-user
    -
    -User name (usually email address).
    -
    -Properties:
    -
    -- Config:      user
    -- Env Var:     RCLONE_SEAFILE_USER
    -- Type:        string
    -- Required:    true
    -
    -#### --seafile-pass
    -
    -Password.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      pass
    -- Env Var:     RCLONE_SEAFILE_PASS
    -- Type:        string
    -- Required:    false
    -
    -#### --seafile-2fa
    -
    -Two-factor authentication ('true' if the account has 2FA enabled).
    -
    -Properties:
    -
    -- Config:      2fa
    -- Env Var:     RCLONE_SEAFILE_2FA
    -- Type:        bool
    -- Default:     false
    -
    -#### --seafile-library
    -
    -Name of the library.
    -
    -Leave blank to access all non-encrypted libraries.
    -
    -Properties:
    -
    -- Config:      library
    -- Env Var:     RCLONE_SEAFILE_LIBRARY
    -- Type:        string
    -- Required:    false
    -
    -#### --seafile-library-key
    -
    -Library password (for encrypted libraries only).
    -
    -Leave blank if you pass it through the command line.
    -
    -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
    -
    -Properties:
    -
    -- Config:      library_key
    -- Env Var:     RCLONE_SEAFILE_LIBRARY_KEY
    -- Type:        string
    -- Required:    false
    -
    -#### --seafile-auth-token
    -
    -Authentication token.
    -
    -Properties:
    -
    -- Config:      auth_token
    -- Env Var:     RCLONE_SEAFILE_AUTH_TOKEN
    -- Type:        string
    -- Required:    false
    -
    -### Advanced options
    -
    -Here are the Advanced options specific to seafile (seafile).
    -
    -#### --seafile-create-library
    -
    -Should rclone create a library if it doesn't exist.
    -
    -Properties:
    -
    -- Config:      create_library
    -- Env Var:     RCLONE_SEAFILE_CREATE_LIBRARY
    -- Type:        bool
    -- Default:     false
    -
    -#### --seafile-encoding
    -
    -The encoding for the backend.
    -
    -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
    -
    -Properties:
    -
    -- Config:      encoding
    -- Env Var:     RCLONE_SEAFILE_ENCODING
    -- Type:        Encoding
    -- Default:     Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
    -
    -
    -
    -#  SFTP
    -
    -SFTP is the [Secure (or SSH) File Transfer
    -Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
    -
    -The SFTP backend can be used with a number of different providers:
    -
    -
    -- Hetzner Storage Box
    -- rsync.net
    -
    -
    -SFTP runs over SSH v2 and is installed as standard with most modern
    -SSH installations.
    -
    -Paths are specified as `remote:path`. If the path does not begin with
    -a `/` it is relative to the home directory of the user.  An empty path
    -`remote:` refers to the user's home directory. For example, `rclone lsd remote:` 
    -would list the home directory of the user configured in the rclone remote config 
    -(i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root 
    -directory of the remote machine (i.e. `/`).
    -
    -Note that some SFTP servers will need the leading / - Synology is a
    -good example of this. rsync.net and Hetzner, on the other hand, require users to
    -OMIT the leading /.
    -
    -Note that by default rclone will try to execute shell commands on
    -the server, see [shell access considerations](#shell-access-considerations).
    -
    -## Configuration
    -
    -Here is an example of making an SFTP configuration.  First run
    -
    -    rclone config
    -
    -This will guide you through an interactive setup process.
    -
    -```
    -No remotes found, make a new one?
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / SSH/SFTP
    -   \ "sftp"
    -[snip]
    -Storage> sftp
    -SSH host to connect to
    -Choose a number from below, or type in your own value
    - 1 / Connect to example.com
    -   \ "example.com"
    -host> example.com
    -SSH username
    -Enter a string value. Press Enter for the default ("$USER").
    -user> sftpuser
    -SSH port number
    -Enter a signed integer. Press Enter for the default (22).
    -port>
    -SSH password, leave blank to use ssh-agent.
    -y) Yes type in my own password
    -g) Generate random password
    -n) No leave this optional password blank
    -y/g/n> n
    -Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
    -key_file>
    -Remote config
    ---------------------
    -[remote]
    -host = example.com
    -user = sftpuser
    -port =
    -pass =
    -key_file =
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -```
    -
    -This remote is called `remote` and can now be used like this:
    -
    -See all directories in the home directory
    -
    -    rclone lsd remote:
    -
    -See all directories in the root directory
    -
    -    rclone lsd remote:/
    -
    -Make a new directory
    -
    -    rclone mkdir remote:path/to/directory
    -
    -List the contents of a directory
    -
    -    rclone ls remote:path/to/directory
    -
    -Sync `/home/local/directory` to the remote directory, deleting any
    -excess files in the directory.
    -
    -    rclone sync --interactive /home/local/directory remote:directory
    -
    -Mount the remote path `/srv/www-data/` to the local path
    -`/mnt/www-data`
    -
    -    rclone mount remote:/srv/www-data/ /mnt/www-data
    -
    -### SSH Authentication
    -
    -The SFTP remote supports three authentication methods:
    -
    -  * Password
    -  * Key file, including certificate signed keys
    -  * ssh-agent
    -
    -Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
    -Only unencrypted OpenSSH or PEM encrypted files are supported.
    -
    -The key file can be specified in either an external file (key_file) or contained within the 
    -rclone config file (key_pem).  If using key_pem in the config file, the entry should be on a
    -single line with newline escapes ('\n' or '\r\n') separating the lines of the key, i.e.
    -
    -    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
    -
    -This will generate it correctly for key_pem for use in the config:
    -
    -    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
    -
    -If you don't specify `pass`, `key_file`, `key_pem`, or `ask_password` then
    -rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent`
    -to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can
    -also be specified to force the usage of a specific key in the ssh-agent.
    -
    -Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
    -
    -If you set the `ask_password` option, rclone will prompt for a password when
    -one is needed and no password has been configured.
    -
    -#### Certificate-signed keys
    -
    -With traditional key-based authentication, you configure your private key only,
    -and the public key built into it will be used during the authentication process.
    -
    -If you have a certificate you may use it to sign your public key, creating a
    -separate SSH user certificate that should be used instead of the plain public key
    -extracted from the private key. Then you must provide the path to the
    -user certificate public key file in `pubkey_file`.
    -
    -Note: This is not the traditional public key paired with your private key,
    -typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
    -`pubkey_file` will not work.
    -
    -Example:
    -
    -

[remote]
type = sftp
host = example.com
user = sftpuser
key_file = ~/id_rsa
pubkey_file = ~/id_rsa-cert.pub

    -
    
    -If you concatenate a cert with a private key then you can specify the
    -merged file in both places.
    -
    -Note: the cert must come first in the file.  e.g.
    -
    -```
    -cat id_rsa-cert.pub id_rsa > merged_key
    -```
    -
    -### Host key validation
    -
    -By default rclone will not check the server's host key for validation.  This
    -can allow an attacker to replace the server with their own, and if you use
    -password authentication this can lead to that password being exposed.
    -
    -Host key matching, using standard `known_hosts` files can be turned on by
    -enabling the `known_hosts_file` option.  This can point to the file maintained
    -by `OpenSSH` or can point to a unique file.
    -
    -e.g. using the OpenSSH `known_hosts` file:
    -
    -```
    +domain> 
    +Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
    +tenant> 
    +Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
    +tenant_id> 
    +Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
    +tenant_domain> 
    +Region name - optional (OS_REGION_NAME)
    +region> 
    +Storage URL - optional (OS_STORAGE_URL)
    +storage_url> 
    +Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
    +auth_token> 
    +AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
    +auth_version> 
    +Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
    +Choose a number from below, or type in your own value
    + 1 / Public (default, choose this if not sure)
    +   \ "public"
    + 2 / Internal (use internal service net)
    +   \ "internal"
    + 3 / Admin
    +   \ "admin"
    +endpoint_type> 
    +Remote config
    +--------------------
    +[test]
    +env_auth = true
    +user = 
    +key = 
    +auth = 
    +user_id = 
    +domain = 
    +tenant = 
    +tenant_id = 
    +tenant_domain = 
    +region = 
    +storage_url = 
    +auth_token = 
    +auth_version = 
    +endpoint_type = 
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this

    +

    See all containers

    +
    rclone lsd remote:
    +

    Make a new container

    +
    rclone mkdir remote:container
    +

    List the contents of a container

    +
    rclone ls remote:container
    +

    Sync /home/local/directory to the remote container, deleting any excess files in the container.

    +
    rclone sync --interactive /home/local/directory remote:container
    +

    Configuration from an OpenStack credentials file

    +

    An OpenStack credentials file typically looks something like this (without the comments)

    +
    export OS_AUTH_URL=https://a.provider.net/v2.0
    +export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
    +export OS_TENANT_NAME="1234567890123456"
    +export OS_USERNAME="123abc567xy"
    +echo "Please enter your OpenStack Password: "
    +read -sr OS_PASSWORD_INPUT
    +export OS_PASSWORD=$OS_PASSWORD_INPUT
    +export OS_REGION_NAME="SBG1"
    +if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
    +

    The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

    +
    [remote]
    +type = swift
    +user = $OS_USERNAME
    +key = $OS_PASSWORD
    +auth = $OS_AUTH_URL
    +tenant = $OS_TENANT_NAME
    +

    Note that you may (or may not) need to set region too - try without first.

    +

    Configuration from the environment

    +

    If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.

    +

    When you run through the config, make sure you choose true for env_auth and leave everything else blank.

    +

    rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.

    +

    Using an alternate authentication method

    +

    If your OpenStack installation uses a non-standard authentication method that might not yet be supported by rclone or the underlying swift library, you can authenticate externally (e.g. manually calling the openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
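    As a minimal sketch, a remote configured with externally obtained credentials might look like this (the token value and storage URL below are illustrative placeholders):

        [remote]
        type = swift
        auth_token = gAAAAAB-illustrative-token
        storage_url = https://storage.example.com/v1/AUTH_tenant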

    +

    Using rclone without a config file

    +

    You can use rclone with swift without a config file, if desired, like this:

    +
    source openstack-credentials-file
    +export RCLONE_CONFIG_MYREMOTE_TYPE=swift
    +export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
    +rclone lsd myremote:
    +

    --fast-list

    +

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
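    For example, assuming a container named container exists on the remote, a recursive listing can take advantage of this:

        rclone ls --fast-list remote:container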

    +

    --update and --use-server-modtime

    +

    As noted below, the modified time is stored as metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

    +

    For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
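    A sketch of such a transfer, with illustrative paths:

        rclone copy --update --use-server-modtime /home/local/directory remote:container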

    +

    Modification times and hashes

    +

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    +

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
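    As an illustration, the stored metadata header looks something like this (the value is a made-up epoch time with nanosecond precision):

        X-Object-Meta-Mtime: 1426486820.000000000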

    +

    The MD5 hash algorithm is supported.

    +

    Restricted filename characters

    | Character | Value | Replacement |
    | --------- |:-----:|:-----------:|
    | NUL       | 0x00  | ␀           |
    | /         | 0x2F  | ／          |
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    +

    Standard options

    +

    Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

    +

    --swift-env-auth

    +

    Get swift credentials from environment variables in standard OpenStack form.

    +

    Properties:

    + +

    --swift-user

    +

    User name to log in (OS_USERNAME).

    +

    Properties:

    + +

    --swift-key

    +

    API key or password (OS_PASSWORD).

    +

    Properties:

    + +

    --swift-auth

    +

    Authentication URL for server (OS_AUTH_URL).

    +

    Properties:

    + +

    --swift-user-id

    +

    User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).

    +

    Properties:

    + +

    --swift-domain

    +

    User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)

    +

    Properties:

    + +

    --swift-tenant

    +

    Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).

    +

    Properties:

    + +

    --swift-tenant-id

    +

    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).

    +

    Properties:

    + +

    --swift-tenant-domain

    +

    Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).

    +

    Properties:

    + +

    --swift-region

    +

    Region name - optional (OS_REGION_NAME).

    +

    Properties:

    + +

    --swift-storage-url

    +

    Storage URL - optional (OS_STORAGE_URL).

    +

    Properties:

    + +

    --swift-auth-token

    +

    Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).

    +

    Properties:

    + +

    --swift-application-credential-id

    +

    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).

    +

    Properties:

    + +

    --swift-application-credential-name

    +

    Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).

    +

    Properties:

    + +

    --swift-application-credential-secret

    +

    Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).

    +

    Properties:

    + +

    --swift-auth-version

    +

    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).

    +

    Properties:

    + +

    --swift-endpoint-type

    +

    Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).

    +

    Properties:

    + +

    --swift-storage-policy

    +

    The storage policy to use when creating a new container.

    +

    This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

    +

    --swift-leave-parts-on-error

    +

    If true avoid calling abort upload on a failure.

    +

    It should be set to true for resuming uploads across different sessions.

    +

    Properties:

    + +

    --swift-chunk-size

    +

    Above this size files will be chunked into a _segments container.

    +

    Above this size files will be chunked into a _segments container. The default for this is 5 GiB which is its maximum value.

    +

    Properties:

    + +

    --swift-no-chunk

    +

    Don't chunk files during streaming upload.

    +

    When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.

    +

    This will limit the maximum upload size to 5 GiB. However non chunked files are easier to deal with and have an MD5SUM.

    +

    Rclone will still chunk files bigger than chunk_size when doing normal copy operations.

    +

    Properties:

    + +

    --swift-no-large-objects

    +

    Disable support for static and dynamic large objects

    +

    Swift cannot transparently store files bigger than 5 GiB. There are two schemes for doing that, static or dynamic large objects, and the API does not allow rclone to determine whether a file is a static or dynamic large object without doing a HEAD on the object. Since these need to be treated differently, this means rclone has to issue HEAD requests for objects, for example when reading checksums.

    +

    When no_large_objects is set, rclone will assume that there are no static or dynamic large objects stored. This means it can stop doing the extra HEAD calls which in turn increases performance greatly especially when doing a swift to swift transfer with --checksum set.

    +

    Setting this option implies no_chunk and also that no files will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload.

    +

    If you set this option and there are static or dynamic large objects, then this will give incorrect hashes for them. Downloads will succeed, but other operations such as Remove and Copy will fail.
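    For example, a swift to swift transfer where you are sure neither side holds large objects might look like this (the remote and container names are illustrative):

        rclone copy --checksum --swift-no-large-objects swift-src:container swift-dst:container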

    +

    Properties:

    + +

    --swift-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --swift-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    +

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    +

    Troubleshooting

    +

    Rclone gives Failed to create file system for "remote:": Bad Request

    +

    Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

    +

    So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
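    For example, to see the full HTTP requests and responses while listing the remote (note the output may contain sensitive information):

        rclone lsd remote: --dump-bodies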

    +

    This may also be caused by specifying the region when you shouldn't have (e.g. OVH).

    +

    Rclone gives Failed to create file system: Response didn't have storage url and auth token

    +

    This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

    +

    OVH Cloud Archive

    +

    To use rclone with OVH cloud archive, first use rclone config to set up a swift backend with OVH, choosing pca as the storage_policy.
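    A minimal config sketch for such a remote, assuming environment-based authentication (adjust the auth settings to your own setup):

        [ovh-archive]
        type = swift
        env_auth = true
        storage_policy = pca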

    +

    Uploading Objects

    +

    Uploading objects to OVH cloud archive is no different to object storage; simply run the command you like (move, copy or sync) to upload the objects. Once uploaded, the objects will show in a "Frozen" state within the OVH control panel.

    +

    Retrieving Objects

    +

    To retrieve objects use rclone copy as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:

    +

    2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

    +

    Rclone will wait for the time specified then retry the copy.

    +

    pCloud

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    +

    The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Pcloud
    +   \ "pcloud"
    +[snip]
    +Storage> pcloud
    +Pcloud App Client Id - leave blank normally.
    +client_id> 
    +Pcloud App Client Secret - leave blank normally.
    +client_secret> 
    +Remote config
    +Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
     [remote]
    +client_id = 
    +client_secret = 
    +token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your pCloud

    +
    rclone lsd remote:
    +

    List all the files in your pCloud

    +
    rclone ls remote:
    +

    To copy a local directory to a pCloud directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Modification times and hashes

    +

    pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded.

    +

    pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum flag.
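    For example, to copy using hash comparison rather than size and modtime (paths are illustrative):

        rclone copy --checksum /home/source remote:backup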

    +

    Restricted filename characters

    +

    In addition to the default restricted characters set the following characters are also replaced:

    | Character | Value | Replacement |
    | --------- |:-----:|:-----------:|
    | \         | 0x5C  | ＼          |
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    +

    Deleting files

    +

    Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.

    +

    Emptying the trash

    +

    Due to an API limitation, the rclone cleanup command will only work if you set your username and password in the advanced options for this backend. Since we generally want to avoid storing user passwords in the rclone config file, we advise you to only set this up if you need the rclone cleanup command to work.
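    If you do need it, a config sketch along these lines enables the cleanup command; the username and obscured password values are illustrative:

        [remote]
        type = pcloud
        username = you@example.com
        password = <output of: rclone obscure YOURPASSWORD>

    after which you can run:

        rclone cleanup remote: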

    +

    Root folder ID

    +

    You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive.

    +

    Normally you will leave this blank and rclone will determine the correct root to use itself.

    +

    However you can set this to restrict rclone to a specific folder hierarchy.

    +

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

    +

    So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
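    Continuing that example, the resulting config would contain something like this (using the placeholder folder ID from the URL above):

        [remote]
        type = pcloud
        root_folder_id = 5xxxxxxxx8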

    +

    Standard options

    +

    Here are the Standard options specific to pcloud (Pcloud).

    +

    --pcloud-client-id

    +

    OAuth Client Id.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --pcloud-client-secret

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to pcloud (Pcloud).

    +

    --pcloud-token

    +

    OAuth Access Token as a JSON blob.

    +

    Properties:

    + +

    --pcloud-auth-url

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --pcloud-token-url

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --pcloud-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --pcloud-root-folder-id

    +

    Fill in for rclone to use a non root folder as its starting point.

    +

    Properties:

    + +

    --pcloud-hostname

    +

    Hostname to connect to.

    +

    This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.

    +

    Properties:

    + +

    --pcloud-username

    +

    Your pcloud username.

    +

    This is only required when you want to use the cleanup command. Due to a bug in the pcloud API the required API call does not support OAuth authentication, so we have to rely on user password authentication for it.

    +

    Properties:

    + +

    --pcloud-password

    +

    Your pcloud password.

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

    + +

    --pcloud-description

    +

    Description of the remote

    +

    Properties:

    + +

    PikPak

    +

    PikPak is a private cloud drive.

    +

    Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    +

    Here is an example of making a remote for PikPak.

    +

    First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> remote
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +XX / PikPak
    +   \ (pikpak)
    +Storage> XX
    +
    +Option user.
    +Pikpak username.
    +Enter a value.
    +user> USERNAME
    +
    +Option pass.
    +Pikpak password.
    +Choose an alternative below.
    +y) Yes, type in my own password
    +g) Generate random password
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> 
    +
    +Configuration complete.
    +Options:
    +- type: pikpak
    +- user: USERNAME
    +- pass: *** ENCRYPTED ***
    +- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
    +Keep this "remote" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Modification times and hashes

    +

    PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time.

    +

    The MD5 hash algorithm is supported.

    +

    Standard options

    +

    Here are the Standard options specific to pikpak (PikPak).

    +

    --pikpak-user

    +

    Pikpak username.

    +

    Properties:

    + +

    --pikpak-pass

    +

    Pikpak password.

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to pikpak (PikPak).

    +

    --pikpak-client-id

    +

    OAuth Client Id.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --pikpak-client-secret

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --pikpak-token

    +

    OAuth Access Token as a JSON blob.

    +

    Properties:

    + +

    --pikpak-auth-url

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --pikpak-token-url

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --pikpak-root-folder-id

    +

    ID of the root folder. Leave blank normally.

    +

    Fill in for rclone to use a non root folder as its starting point.

    +

    Properties:

    + +

    --pikpak-use-trash

    +

    Send files to the trash instead of deleting permanently.

    +

    Defaults to true, namely sending files to the trash. Use --pikpak-use-trash=false to delete files permanently instead.

    +

    Properties:

    + +

    --pikpak-trashed-only

    +

    Only show files that are in the trash.

    +

    This will show trashed files in their original directory structure.

    +

    Properties:

    + +

    --pikpak-hash-memory-limit

    +

    Files bigger than this will be cached on disk to calculate hash if required.

    +

    Properties:

    + +

    --pikpak-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --pikpak-description

    +

    Description of the remote

    +

    Properties:

    + +

    Backend commands

    +

    Here are the commands specific to the pikpak backend.

    +

    Run them with

    +
    rclone backend COMMAND remote:
    +

    The help below will explain what arguments each command takes.

    +

    See the backend command for more info on how to pass options and arguments.

    +

    These can be run on a running backend using the rc command backend/command.

    +

    addurl

    +

    Add offline download task for url

    +
    rclone backend addurl remote: [options] [<arguments>+]
    +

    This command adds offline download task for url.

    +

    Usage:

    +
    rclone backend addurl pikpak:dirpath url
    +

    Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, the download will fall back to the default 'My Pack' folder.
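    For example, with an illustrative directory name and a hypothetical URL:

        rclone backend addurl pikpak:mydir "https://example.com/file.zip"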

    +

    decompress

    +

    Request decompress of a file/files in a folder

    +
    rclone backend decompress remote: [options] [<arguments>+]
    +

    This command requests decompress of file/files in a folder.

    +

    Usage:

    +
    rclone backend decompress pikpak:dirpath {filename} -o password=password
    +rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
    +

    An optional argument 'filename' can be specified for a file located in 'pikpak:dirpath'. You may want to pass '-o password=password' for password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression is finished.

    +

    Result:

    +
    {
    +    "Decompressed": 17,
    +    "SourceDeleted": 0,
    +    "Errors": 0
    +}
    +

    Limitations

    +

    Hashes may be empty

    +

    PikPak supports the MD5 hash, but it is sometimes empty, especially for user-uploaded files.

    +

    Deleted files still visible with trashed-only

    +

    Deleted files will still be visible with --pikpak-trashed-only even after the trash is emptied. This goes away after a few days.

    +

    premiumize.me

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    +

    The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / premiumize.me
    +   \ "premiumizeme"
    +[snip]
    +Storage> premiumizeme
    +** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
    +
    +Remote config
    +Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[remote]
    +type = premiumizeme
    +token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> 
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your premiumize.me

    +
    rclone lsd remote:
    +

    List all the files in your premiumize.me

    +
    rclone ls remote:
    +

    To copy a local directory to an premiumize.me directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Modification times and hashes

    +

    premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.

    +

    Restricted filename characters

    +

    In addition to the default restricted characters set the following characters are also replaced:

    | Character | Value | Replacement |
    | --------- |:-----:|:-----------:|
    | \         | 0x5C  | ＼          |
    | "         | 0x22  | ＂          |
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    +

    Standard options

    +

    Here are the Standard options specific to premiumizeme (premiumize.me).

    +

    --premiumizeme-client-id

    +

    OAuth Client Id.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --premiumizeme-client-secret

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --premiumizeme-api-key

    +

    API Key.

    +

    This is not normally used - use oauth instead.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to premiumizeme (premiumize.me).

    +

    --premiumizeme-token

    +

    OAuth Access Token as a JSON blob.

    +

    Properties:

    + +

    --premiumizeme-auth-url

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --premiumizeme-token-url

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --premiumizeme-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --premiumizeme-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    +

    Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    +

    premiumize.me file names can't have the \ or " characters in. rclone maps these to and from identical looking unicode equivalents ＼ and ＂.

    +

    premiumize.me only supports filenames up to 255 characters in length.

    +

    Proton Drive

    +

    Proton Drive is an end-to-end encrypted Swiss vault for your files that protects your data.

    +

    This is an rclone backend for Proton Drive which supports the file transfer features of Proton Drive using the same client-side encryption.

    +

    Due to the fact that Proton Drive doesn't publish its API documentation, this backend is implemented with best efforts by reading the open-sourced client source code and observing the Proton Drive traffic in the browser.

    +

    NB This backend is currently in Beta. It is believed to be correct and all the integration tests pass. However, as the Proton Drive protocol has evolved over time, there may be accounts it is not compatible with. Please post on the rclone forum if you find an incompatibility.

    +

    Paths are specified as remote:path

    +

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configurations

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Proton Drive
    +   \ "Proton Drive"
    +[snip]
    +Storage> protondrive
    +User name
    +user> you@protonmail.com
    +Password.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank
    +y/g/n> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Option 2fa.
    +2FA code (if the account requires one)
    +Enter a value. Press Enter to leave empty.
    +2fa> 123456
    +Remote config
    +--------------------
    +[remote]
    +type = protondrive
    +user = you@protonmail.com
    +pass = *** ENCRYPTED ***
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.

    +

    Once configured you can then use rclone like this,

    +

    List directories in top level of your Proton Drive

    +
    rclone lsd remote:
    +

    List all the files in your Proton Drive

    +
    rclone ls remote:
    +

    To copy a local directory to an Proton Drive directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Modification times and hashes

    +

    Proton Drive Bridge does not support updating modification times yet.

    +

    The SHA1 hash algorithm is supported.

    +

    Restricted filename characters

    +

    Invalid UTF-8 bytes will be replaced; leading and trailing spaces will also be removed (code reference)

    +

    Duplicated files

    +

    Proton Drive cannot have two files with exactly the same name and path. If a conflict occurs then, depending on the advanced config, the file might or might not be overwritten.

    +

    Mailbox password

    +

    Please set your mailbox password in the advanced config section.

    +

    Caching

    +

    The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system (the Proton API system that provides visibility of what has changed on the drive) is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, the cache may end up serving stale data.

    +

    Standard options

    +

    Here are the Standard options specific to protondrive (Proton Drive).

    +

    --protondrive-username

    +

    The username of your proton account

    +

    Properties:

    + +

    --protondrive-password

    +

    The password of your proton account.

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

    + +

    --protondrive-2fa

    +

    The 2FA code

    +

    The value can also be provided with --protondrive-2fa=000000

    +

    The 2FA code of your proton drive account if the account is set up with two-factor authentication

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to protondrive (Proton Drive).

    +

    --protondrive-mailbox-password

    +

    The mailbox password of your two-password proton account.

    +

    For more information regarding the mailbox password, please check the following official knowledge base article: https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

    + +

    --protondrive-client-uid

    +

    Client uid key (internal use only)

    +

    Properties:

    + +

    --protondrive-client-access-token

    +

    Client access token key (internal use only)

    +

    Properties:

    + +

    --protondrive-client-refresh-token

    +

    Client refresh token key (internal use only)

    +

    Properties:

    + +

    --protondrive-client-salted-key-pass

    +

    Client salted key pass key (internal use only)

    +

    Properties:

    + +

    --protondrive-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --protondrive-original-file-size

    +

    Return the file size before encryption

    +

    The size of the encrypted file will be different from (bigger than) the original file size. Unless there is a reason to return the file size after encryption, set this option to true, as features like Open(), which need to be supplied with the original content size, will otherwise fail to operate properly.

    +

    Properties:

    + +

    --protondrive-app-version

    +

    The app version string

    +

    The app version string indicates the client that is currently performing the API request. This information is required and will be sent with every API request.

    +

    Properties:

    + +

    --protondrive-replace-existing-draft

    +

    Create a new revision when filename conflict is detected

    +

    When a file upload is cancelled or fails before completion, a draft will be created and the subsequent upload of the same file to the same location will be reported as a conflict.

    +

    The value can also be set by --protondrive-replace-existing-draft=true

    +

    If the option is set to true, the draft will be replaced and then the upload operation will restart. If there are other clients also uploading to the same file location at the same time, the behavior is currently unknown. This needs to be set to true for the integration tests. If the option is set to false, an error "a draft exist - usually this means a file is being uploaded at another client, or, there was a failed upload attempt" will be returned, and no upload will happen.

    +

    Properties:

    + +

    --protondrive-enable-caching

    +

    Caches the files and folders metadata to reduce API calls

    +

    Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, as the current implementation doesn't update or clear the cache when there are external changes.

    +

    The files and folders on ProtonDrive are represented as links with keyrings, which can be cached to improve performance and be friendly to the API server.

    +

    The cache is currently built for the case where rclone is the only instance performing operations on the mount point. The event system (the Proton API system that provides visibility of what has changed on the drive) is yet to be implemented, so updates from other clients won't be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, the cache may end up serving stale data.

    +

    Properties:

    + +

    --protondrive-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    +

    This backend uses the Proton-API-Bridge, which is based on go-proton-api, a fork of the official repo.

    +

    There is no official API documentation available from Proton Drive. But, thanks to Proton open sourcing proton-go-api and the web, iOS, and Android client codebases, we don't need to completely reverse engineer the APIs by observing the web client traffic!

    +

    proton-go-api provides the basic building blocks of API calls and error handling, such as 429 exponential back-off, but it is pretty much just a barebones interface to the Proton API. For example, the encryption and decryption of Proton Drive files are not provided in this library.

    +

    The Proton-API-Bridge attempts to bridge the gap so rclone can be built on top of it quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.

    +

    put.io

    +

    Paths are specified as remote:path

    +

    put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    +

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    +

    Here is an example of how to make a remote called remote. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> putio
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Put.io
    +   \ "putio"
    +[snip]
    +Storage> putio
    +** See help for putio backend at: https://rclone.org/putio/ **
    +
    +Remote config
    +Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
    +y) Yes
    +n) No
    +y/n> y
    +If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    +Log in and authorize rclone for access
    +Waiting for code...
    +Got code
    +--------------------
    +[putio]
    +type = putio
    +token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +Current remotes:
    +
    +Name                 Type
    +====                 ====
    +putio                putio
    +
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/r/c/s/q> q
    +

    See the remote setup docs for how to set it up on a machine with no Internet browser available.

    +

    Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.

    +

    You can then use it like this,

    +

    List directories in top level of your put.io

    +
    rclone lsd remote:
    +

    List all the files in your put.io

    +
    rclone ls remote:
    +

    To copy a local directory to a put.io directory called backup

    +
    rclone copy /home/source remote:backup
    +

    Restricted filename characters

    +

    In addition to the default restricted characters set the following characters are also replaced:

    | Character | Value | Replacement |
    | --------- |:-----:|:-----------:|
    | \         | 0x5C  | ＼          |
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    +

    Standard options

    +

    Here are the Standard options specific to putio (Put.io).

    +

    --putio-client-id

    +

    OAuth Client Id.

    +

    Leave blank normally.

    +

    Properties:

    + +

    --putio-client-secret

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to putio (Put.io).

    +

    --putio-token

    +

    OAuth Access Token as a JSON blob.

    +

    Properties:

    + +

    --putio-auth-url

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --putio-token-url

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    +

    Properties:

    + +

    --putio-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --putio-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    +

    put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.

    +

    If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
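    For example, to limit rclone to one transaction per second (the value 1 and the paths are arbitrary illustrations):

        rclone copy --tpslimit 1 /home/source remote:backup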

    +


    Seafile

    +

    This is a backend for the Seafile storage service:

    - It works with both the free community edition and the professional edition.
    - Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
    - Encrypted libraries are also supported.
    - It supports 2FA enabled users.
    - Using a Library API Token is not supported.

    +

    Configuration

    +

    There are two distinct modes you can set up your remote in:

    - You point your remote to the root of the server, meaning you don't specify a library during the configuration: paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
    - You point your remote to a specific library during the configuration: paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)

    +

    Configuration in root mode

    +

    Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> seafile
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Seafile
    +   \ "seafile"
    +[snip]
    +Storage> seafile
    +** See help for seafile backend at: https://rclone.org/seafile/ **
    +
    +URL of seafile host to connect to
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    + 1 / Connect to cloud.seafile.com
    +   \ "https://cloud.seafile.com/"
    +url> http://my.seafile.server/
    +User name (usually email address)
    +Enter a string value. Press Enter for the default ("").
    +user> me@example.com
    +Password
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Two-factor authentication ('true' if the account has 2FA enabled)
    +Enter a boolean value (true or false). Press Enter for the default ("false").
    +2fa> false
    +Name of the library. Leave blank to access all non-encrypted libraries.
    +Enter a string value. Press Enter for the default ("").
    +library>
    +Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g/n> n
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No (default)
    +y/n> n
    +Remote config
    +Two-factor authentication is not enabled on this account.
    +--------------------
    +[seafile]
    +type = seafile
    +url = http://my.seafile.server/
    +user = me@example.com
    +pass = *** ENCRYPTED ***
    +2fa = false
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:

    +

    See all libraries

    +
    rclone lsd seafile:
    +

    Create a new library

    +
    rclone mkdir seafile:library
    +

    List the contents of a library

    +
    rclone ls seafile:library
    +

    Sync /home/local/directory to the remote library, deleting any excess files in the library.

    +
    rclone sync --interactive /home/local/directory seafile:library
    +

    Configuration in library mode

    +

Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will then attempt to authenticate you:

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> seafile
    +Type of storage to configure.
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Seafile
    +   \ "seafile"
    +[snip]
    +Storage> seafile
    +** See help for seafile backend at: https://rclone.org/seafile/ **
    +
    +URL of seafile host to connect to
    +Enter a string value. Press Enter for the default ("").
    +Choose a number from below, or type in your own value
    + 1 / Connect to cloud.seafile.com
    +   \ "https://cloud.seafile.com/"
    +url> http://my.seafile.server/
    +User name (usually email address)
    +Enter a string value. Press Enter for the default ("").
    +user> me@example.com
    +Password
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g> y
    +Enter the password:
    +password:
    +Confirm the password:
    +password:
    +Two-factor authentication ('true' if the account has 2FA enabled)
    +Enter a boolean value (true or false). Press Enter for the default ("false").
    +2fa> true
    +Name of the library. Leave blank to access all non-encrypted libraries.
    +Enter a string value. Press Enter for the default ("").
    +library> My Library
    +Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank (default)
    +y/g/n> n
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No (default)
    +y/n> n
    +Remote config
    +Two-factor authentication: please enter your 2FA code
    +2fa code> 123456
    +Authenticating...
    +Success!
    +--------------------
    +[seafile]
    +type = seafile
    +url = http://my.seafile.server/
    +user = me@example.com
    +pass = 
    +2fa = true
    +library = My Library
    +--------------------
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.

    +

    You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library:

    +

    See all files in the library:

    +
    rclone lsd seafile:
    +

    Create a new directory inside the library

    +
    rclone mkdir seafile:directory
    +

    List the contents of a directory

    +
    rclone ls seafile:directory
    +

    Sync /home/local/directory to the remote library, deleting any excess files in the library.

    +
    rclone sync --interactive /home/local/directory seafile:
    +

    --fast-list

    +

Seafile version 7+ supports --fast-list, which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x.
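
For example, to recursively list a library with fewer transactions (the remote and library names are illustrative):

    rclone lsf -R --fast-list seafile:library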

    +

    Restricted filename characters

    +

    In addition to the default restricted characters set the following characters are also replaced:

  Character   Value   Replacement
  ----------- ------- -------------
  /           0x2F    ／
  "           0x22    ＂
  \           0x5C    ＼
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    + +

    Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

    +
    rclone link seafile:seafile-tutorial.doc
    +http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
    +
    +

    or if run on a directory you will get:

    +
    rclone link seafile:dir
    +http://my.seafile.server/d/9ea2455f6f55478bbb0d/
    +

    Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.

    +

    Compatibility

    +

It has been actively developed using the seafile docker image of these versions:

- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition

    +

    Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

    +

    Each new version of rclone is automatically tested against the latest docker image of the seafile community server.

    +

    Standard options

    +

    Here are the Standard options specific to seafile (seafile).

    +

    --seafile-url

    +

    URL of seafile host to connect to.

    +

    Properties:

    + +

    --seafile-user

    +

    User name (usually email address).

    +

    Properties:

    + +

    --seafile-pass

    +

    Password.

    +

    NB Input to this must be obscured - see rclone obscure.
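
For example, to produce an obscured version of a password suitable for pasting into the config file (the password shown is a placeholder):

    rclone obscure 'YourSeafilePassword'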

    +

    Properties:

    + +

    --seafile-2fa

    +

    Two-factor authentication ('true' if the account has 2FA enabled).

    +

    Properties:

    + +

    --seafile-library

    +

    Name of the library.

    +

    Leave blank to access all non-encrypted libraries.

    +

    Properties:

    + +

    --seafile-library-key

    +

    Library password (for encrypted libraries only).

    +

    Leave blank if you pass it through the command line.

    +

    NB Input to this must be obscured - see rclone obscure.

    +

    Properties:

    + +

    --seafile-auth-token

    +

    Authentication token.

    +

    Properties:

    + +

    Advanced options

    +

    Here are the Advanced options specific to seafile (seafile).

    +

    --seafile-create-library

    +

    Should rclone create a library if it doesn't exist.

    +

    Properties:

    + +

    --seafile-encoding

    +

    The encoding for the backend.

    +

    See the encoding section in the overview for more info.

    +

    Properties:

    + +

    --seafile-description

    +

    Description of the remote

    +

    Properties:

    + +

    SFTP

    +

    SFTP is the Secure (or SSH) File Transfer Protocol.

    +

    The SFTP backend can be used with a number of different providers:

    + +

    SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

    +

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

    +

Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.

    +

    Note that by default rclone will try to execute shell commands on the server, see shell access considerations.

    +

    Configuration

    +

    Here is an example of making an SFTP configuration. First run

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / SSH/SFTP
    +   \ "sftp"
    +[snip]
    +Storage> sftp
    +SSH host to connect to
    +Choose a number from below, or type in your own value
    + 1 / Connect to example.com
    +   \ "example.com"
    +host> example.com
    +SSH username
    +Enter a string value. Press Enter for the default ("$USER").
    +user> sftpuser
    +SSH port number
    +Enter a signed integer. Press Enter for the default (22).
    +port>
    +SSH password, leave blank to use ssh-agent.
    +y) Yes type in my own password
    +g) Generate random password
    +n) No leave this optional password blank
    +y/g/n> n
    +Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
    +key_file>
    +Remote config
    +--------------------
    +[remote]
    +host = example.com
    +user = sftpuser
    +port =
    +pass =
    +key_file =
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This remote is called remote and can now be used like this:

    +

    See all directories in the home directory

    +
    rclone lsd remote:
    +

    See all directories in the root directory

    +
    rclone lsd remote:/
    +

    Make a new directory

    +
    rclone mkdir remote:path/to/directory
    +

    List the contents of a directory

    +
    rclone ls remote:path/to/directory
    +

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    +
    rclone sync --interactive /home/local/directory remote:directory
    +

    Mount the remote path /srv/www-data/ to the local path /mnt/www-data

    +
    rclone mount remote:/srv/www-data/ /mnt/www-data
    +

    SSH Authentication

    +

    The SFTP remote supports three authentication methods:

    + +

    Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.

    +

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.

    +
    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
    +

    This will generate it correctly for key_pem for use in the config:

    +
    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
    +

If you don't specify pass, key_file, key_pem or ask_password then rclone will attempt to contact an ssh-agent. You can also specify key_use_agent to force the use of an ssh-agent. In this case key_file or key_pem can also be specified to force the use of a specific key in the ssh-agent.

    +

    Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
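
As a sketch, a config entry that forces use of the ssh-agent but pins a specific key might look like this (the host, user and key path are assumptions):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    key_use_agent = true
    key_file = ~/.ssh/id_rsa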

    +

    If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.

    +

    Certificate-signed keys

    +

    With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.

    +

    If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.

    +

    Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.

    +

    Example:

    +
    [remote]
    +type = sftp
    +host = example.com
    +user = sftpuser
    +key_file = ~/id_rsa
    +pubkey_file = ~/id_rsa-cert.pub
    +

    If you concatenate a cert with a private key then you can specify the merged file in both places.

    +

    Note: the cert must come first in the file. e.g.

    +
    cat id_rsa-cert.pub id_rsa > merged_key
    +
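
The merged file can then be referenced in both options, for example (paths are illustrative):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    key_file = ~/merged_key
    pubkey_file = ~/merged_key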

    Host key validation

    +

By default rclone will not check the server's host key for validation. This can allow an attacker to replace a server with their own and, if you use password authentication, this can lead to that password being exposed.

    +

Host key matching, using standard known_hosts files, can be turned on by enabling the known_hosts_file option. This can point to the file maintained by OpenSSH or can point to a unique file.

    +

    e.g. using the OpenSSH known_hosts file:

    +
    [remote]
     type = sftp
     host = example.com
     user = sftpuser
    @@ -33521,14 +34261,14 @@ known_hosts_file = ~/.ssh/known_hosts

    The options md5sum_command and sha1_command can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum as the value of option md5sum_command to make sure a specific executable is used.
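
As a sketch, pinning the checksum commands to a specific rclone binary on the server could look like this (the binary path, host and user are assumptions):

    [remote]
    type = sftp
    host = example.com
    user = sftpuser
    md5sum_command = /usr/local/bin/rclone md5sum
    sha1_command = /usr/local/bin/rclone sha1sum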

Remote checksumming is recommended and enabled by default. The first time rclone uses an SFTP remote, if the options md5sum_command or sha1_command are not set, it will check whether any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so the same commands will be used on subsequent runs. The value none will be set if none of the default commands could be used for a specific algorithm, and this algorithm will not be supported by the remote.

    Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming entirely, or set shell_type to none to disable all functionality based on remote shell command execution.

    -

    Modification times and hashes

    +

    Modification times and hashes

    Modified times are stored on the server to 1 second precision.

    Modified times are used in syncing and are fully supported.

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.

    About command

    The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.

    SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
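
For example, assuming a remote named remote::

    rclone about remote: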

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sftp (SSH/SFTP).

    --sftp-host

    SSH host to connect to.

    @@ -33679,7 +34419,7 @@ known_hosts_file = ~/.ssh/known_hosts
  • Type: SpaceSepList
  • Default:
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sftp (SSH/SFTP).

    --sftp-known-hosts-file

    Optional path to known_hosts file.

    @@ -33981,7 +34721,16 @@ server_command = sudo /usr/libexec/openssh/sftp-server
  • Type: bool
  • Default: false
  • -

    Limitations

    +

    --sftp-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

The only ssh agent supported under Windows is PuTTY's Pageant.

    The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper.
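
If you must talk to such a server, a per-remote override might look like this (use with caution, as the cipher is insecure; the host and user are assumptions):

    [remote]
    type = sftp
    host = legacy.example.com
    user = sftpuser
    use_insecure_cipher = true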

    @@ -34002,7 +34751,7 @@ server_command = sudo /usr/libexec/openssh/sftp-server

The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the smb.conf file (usually in /etc/samba/). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).
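
For example, to discover the share names and then list inside one (the share name is an assumption):

    rclone lsd remote:
    rclone ls remote:myshare/path/to/dir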

You can't access shared printers from rclone.

You can't use Anonymous access for logging in; you have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, e.g. \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.

    -

    Configuration

    +

    Configuration

    Here is an example of making a SMB configuration.

    First run

    rclone config
    @@ -34077,7 +34826,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> d -

    Standard options

    +

    Standard options

    Here are the Standard options specific to smb (SMB / CIFS).

    --smb-host

    SMB server hostname to connect to.

    @@ -34138,7 +34887,7 @@ y/e/d> d
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to smb (SMB / CIFS).

    --smb-idle-timeout

    Max time before closing idle connections.

    @@ -34180,6 +34929,15 @@ y/e/d> d
  • Type: Encoding
  • Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
  • +

    --smb-description

    +

    Description of the remote

    +

    Properties:

    +

    Storj

    Storj is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.

    Backend options

    @@ -34239,7 +34997,7 @@ y/e/d> d
  • S3 backend: secret encryption key is shared with the gateway
  • -

    Configuration

    +

    Configuration

To make a new Storj configuration you need one of the following:

- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -34336,7 +35094,7 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -

    Standard options

    +

    Standard options

    Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

    --storj-provider

    Choose an authentication method.

    @@ -34415,6 +35173,17 @@ y/e/d> y
  • Type: string
  • Required: false
  • +

    Advanced options

    +

    Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).

    +

    --storj-description

    +

    Description of the remote

    +

    Properties:

    +

    Usage

Paths are specified as remote:bucket (or remote: for the lsf command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Once configured you can then use rclone like this.

    @@ -34469,7 +35238,7 @@ y/e/d> y
    rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

    Or even between another cloud storage and Storj.

    rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
    -

    Limitations

    +

    Limitations

    rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Known issues

    @@ -34477,7 +35246,7 @@ y/e/d> y

To fix these, please raise your system limits. You can do this by issuing ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
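
For example, in a shell session or startup script (the paths and remote name are illustrative):

    ulimit -n 65536
    rclone sync --interactive /home/local/directory remote:bucket/path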

    SugarSync

    SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

    -

    Configuration

    +

    Configuration

    The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -34542,15 +35311,15 @@ y/e/d> y

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

NB you can't create files in the top level folder; you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.
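
For example, create a folder first and then copy into it (the names are illustrative):

    rclone mkdir remote:Backup
    rclone copy /home/source remote:Backup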

    -

    Modification times and hashes

    +

    Modification times and hashes

    SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.

    -

    Restricted filename characters

    +

    Restricted filename characters

    SugarSync replaces the default restricted characters set except for DEL.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Deleting files

    +

    Deleting files

    Deleted files will be moved to the "Deleted items" folder by default.

    However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
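
For example, to delete a file immediately rather than moving it to "Deleted items" (the path is illustrative):

    rclone delete --sugarsync-hard-delete remote:dir/file.txt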

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to sugarsync (Sugarsync).

    --sugarsync-app-id

    Sugarsync App ID.

    @@ -34591,7 +35360,7 @@ y/e/d> y
  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sugarsync (Sugarsync).

    --sugarsync-refresh-token

    Sugarsync refresh token.

    @@ -34663,7 +35432,16 @@ y/e/d> y
  • Type: Encoding
  • Default: Slash,Ctl,InvalidUtf8,Dot
  • -

    Limitations

    +

    --sugarsync-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    Tardigrade

    @@ -34672,7 +35450,7 @@ y/e/d> y

This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config
    @@ -34726,9 +35504,9 @@ y/e/d>
    rclone ls remote:

    To copy a local directory to an Uptobox directory called backup

    rclone copy /home/source remote:backup
    -

    Modification times and hashes

    +

    Modification times and hashes

    Uptobox supports neither modified times nor checksums. All timestamps will read as that set by --default-time.

    -

    Restricted filename characters

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -34752,7 +35530,7 @@ y/e/d>

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to uptobox (Uptobox).

    --uptobox-access-token

    Your access token.

    @@ -34764,7 +35542,7 @@ y/e/d>
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to uptobox (Uptobox).

    --uptobox-private

    Set to make uploaded files private

    @@ -34785,7 +35563,16 @@ y/e/d>
  • Type: Encoding
  • Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
  • -

    Limitations

    +

    --uptobox-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    Uptobox will delete inactive files that have not been accessed in 60 days.

rclone about is not supported by this backend. An overview of used space can, however, be seen in the Uptobox web interface.

    Union

    @@ -34799,7 +35586,7 @@ y/e/d>

Subfolders can be used in upstream remotes. Assume a union remote named backup with the upstream mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

    There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
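
A minimal sketch of the config entry behind the backup example above (the remote and path names are assumptions):

    [backup]
    type = union
    upstreams = mydrive:private/backup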

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a union called remote for local folders. First run:

     rclone config

    This will guide you through an interactive setup process:

    @@ -34936,7 +35723,7 @@ e/n/d/r/c/s/q> q

    To check if your upstream supports the field, run rclone about remote: [flags] and see if the required field exists.

    -

    Filters

    +

    Filters

Policies search the upstream remotes and create a list of files/paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting, but filtering is mostly uniform, as described below.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

    --union-min-free-space

    Minimum viable free space for lfs/eplfs policies.

    @@ -35093,13 +35880,22 @@ upstreams = /local:writeback remote:dir
  • Type: SizeSuffix
  • Default: 1Gi
  • +

    --union-description

    +

    Description of the remote

    +

    Properties:

    +

    Metadata

    Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.

    WebDAV

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -35173,10 +35969,10 @@ y/e/d> y
    rclone ls remote:

To copy a local directory to a WebDAV directory called backup

    rclone copy /home/source remote:backup
    -

    Modification times and hashes

    +

    Modification times and hashes

    Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.

    Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to webdav (WebDAV).

    --webdav-url

    URL of http host to connect to.

    @@ -35257,7 +36053,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to webdav (WebDAV).

    --webdav-bearer-token-command

    Command to run to get a bearer token.

    @@ -35312,9 +36108,27 @@ y/e/d> y
  • Type: SizeSuffix
  • Default: 10Mi
  • +

    --webdav-owncloud-exclude-shares

    +

    Exclude ownCloud shares

    +

    Properties:

    + +

    --webdav-description

    +

    Description of the remote

    +

    Properties:

    +

    Provider notes

    See below for notes on specific providers.

    -

    Fastmail Files

    +

    Fastmail Files

    Use https://webdav.fastmail.com/ or a subdirectory as the URL, and your Fastmail email username@domain.tld as the username. Follow this documentation to create an app password with access to Files (WebDAV) and use this as the password.

    Fastmail supports modified times using the X-OC-Mtime header.

    Owncloud

    @@ -35390,7 +36204,7 @@ vendor = other bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    -

    Configuration

    +

    Configuration

    Here is an example of making a yandex configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -35444,17 +36258,17 @@ y/e/d> y

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync --interactive /home/local/directory remote:directory

    Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Modification times and hashes

    +

    Modification times and hashes

    Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    The MD5 hash algorithm is natively supported by Yandex Disk.

    Emptying Trash

    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

    -

    Restricted filename characters

    +

    Restricted filename characters

    The default restricted characters set are replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    OAuth Client Id.

    @@ -35476,7 +36290,7 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to yandex (Yandex Disk).

    --yandex-token

    OAuth Access Token as a JSON blob.

    @@ -35526,13 +36340,22 @@ y/e/d> y
  • Type: Encoding
  • Default: Slash,Del,Ctl,InvalidUtf8,Dot
  • -

    Limitations

    +

    --yandex-description

    +

    Description of the remote

    +

    Properties:

    + +

    Limitations

    When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
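
For example, uploading a 30 GiB file with a raised timeout (the file path and remote name are illustrative):

    rclone copy --timeout 60m /path/to/30GiB.file remote:backup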

    Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.

    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

    Zoho Workdrive

    Zoho WorkDrive is a cloud storage solution created by Zoho.

    -

    Configuration

    +

    Configuration

    Here is an example of making a zoho configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -35605,14 +36428,14 @@ y/e/d>

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync --interactive /home/local/directory remote:directory

Zoho paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Modification times and hashes

    +

    Modification times and hashes

Modified times are currently not supported for Zoho Workdrive.

    No hash algorithms are supported.

    Usage information

    To view your current quota you can use the rclone about remote: command which will display your current usage.

    -

    Restricted filename characters

    +

    Restricted filename characters

    Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

    -

    Standard options

    +

    Standard options

    Here are the Standard options specific to zoho (Zoho).

    --zoho-client-id

    OAuth Client Id.

    @@ -35671,7 +36494,7 @@ y/e/d> -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to zoho (Zoho).

    --zoho-token

    OAuth Access Token as a JSON blob.

    @@ -35712,6 +36535,15 @@ y/e/d>
  • Type: Encoding
  • Default: Del,Ctl,InvalidUtf8
  • +

    --zoho-description

    +

    Description of the remote

    +

    Properties:

    +

    Setting up your own client_id

    For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps.

      @@ -35724,7 +36556,7 @@ y/e/d>

      Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

      rclone sync --interactive /home/source /tmp/destination

      Will sync /home/source to /tmp/destination.

      -

      Configuration

      +

      Configuration

For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.

      Modification times

Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.

      @@ -36095,7 +36927,7 @@ $ tree /tmp/b 0 file2

      NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.

      NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.

      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to local (Local Disk).

      --local-nounc

      Disable UNC (long path names) conversion on Windows.

      @@ -36259,9 +37091,19 @@ $ tree /tmp/b
  • Type: Encoding
  • Default: Slash,Dot
  +

      --local-description

      +

      Description of the remote

      +

      Properties:

      +

      Metadata

      Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on linux, freebsd, netbsd, macOS and Solaris. It is not supported on Windows yet (see pkg/attrs#47).

      User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix.

      +

      Metadata is supported on files and directories.

      Here are the possible system metadata items for the local backend.

      @@ -36333,7 +37175,7 @@ $ tree /tmp/b

      See the metadata docs for more info.

      -

      Backend commands

      +

      Backend commands

      Here are the commands specific to the local backend.

      Run them with

      rclone backend COMMAND remote:
      @@ -36350,6 +37192,362 @@ $ tree /tmp/b
    4. "error": return an error based on option value
    5. Changelog

      +

      v1.66.0 - 2024-03-10

      +

      See commits

      + +

      v1.65.2 - 2024-01-24

      +

      See commits

      + +

      v1.65.1 - 2024-01-08

      +

      See commits

      +

      v1.65.0 - 2023-11-26

      See commits

      Bugs and Limitations

      -

      Limitations

      -

      Directory timestamps aren't preserved

      -

      Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

      +

      Limitations

      +

      Directory timestamps aren't preserved on some backends

      +

As of v1.66, rclone supports syncing directory modtimes, if the backend supports it. Some backends do not support it -- see the overview for a complete list. Additionally, note that empty directories are not synced by default (this can be enabled with --create-empty-src-dirs).
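
For example, to sync including empty directories, so their modtimes are preserved too (the paths are illustrative):

    rclone sync --create-empty-src-dirs /path/to/src remote:dst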

      Rclone struggles with millions of files in a directory/bucket

      Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory.

      Millions of files in a directory tends to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.

      @@ -43798,7 +44996,7 @@ THE SOFTWARE.
  • Scott McGillivray
  • Bjørn Erik Pedersen
  • Lukas Loesche
- • emyarod
+ • emyarod
  • T.C. Ferguson
  • Brandur
  • Dario Giovannetti

@@ -44546,6 +45744,27 @@ THE SOFTWARE.

  • Alen Šiljak
  • 你知道未来吗
  • Abhinav Dhiman
+ • halms
+ • ben-ba
+ • Eli Orzitzer
+ • Anthony Metzidis
+ • emyarod
+ • keongalvin
+ • rarspace01
+ • Paul Stern
+ • Nikhil Ahuja
+ • Harshit Budhraja
+ • Tera
+ • Kyle Reynolds
+ • Michael Eischer
+ • Thomas Müller
+ • DanielEgbers
+ • Jack Provance
+ • Gabriel Ramos
+ • Dan McArdle
+ • Joe Cai
+ • Anders Swanson
+ • huajin tong

Contact the rclone project

      Forum

      diff --git a/MANUAL.md b/MANUAL.md index 13bee7b75..1570bb645 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Nov 26, 2023 +% Mar 10, 2024 # Rclone syncs your files to cloud storage @@ -89,6 +89,7 @@ Rclone helps you: - Can use multi-threaded downloads to local disk - [Copy](https://rclone.org/commands/rclone_copy/) new or changed files to cloud storage - [Sync](https://rclone.org/commands/rclone_sync/) (one way) to make a directory identical +- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally - [Move](https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after verification - [Check](https://rclone.org/commands/rclone_check/) hashes and for missing/extra files - [Mount](https://rclone.org/commands/rclone_mount/) your cloud storage as a network disk @@ -104,7 +105,6 @@ WebDAV or S3, that work out of the box.) - 1Fichier - Akamai Netstorage - Alibaba Cloud (Aliyun) Object Storage System (OSS) -- Amazon Drive - Amazon S3 - Backblaze B2 - Box @@ -127,6 +127,7 @@ WebDAV or S3, that work out of the box.) - Hetzner Storage Box - HiDrive - HTTP +- ImageKit - Internet Archive - Jottacloud - IBM COS S3 @@ -856,7 +857,6 @@ See the following for detailed instructions for * [1Fichier](https://rclone.org/fichier/) * [Akamai Netstorage](https://rclone.org/netstorage/) * [Alias](https://rclone.org/alias/) - * [Amazon Drive](https://rclone.org/amazonclouddrive/) * [Amazon S3](https://rclone.org/s3/) * [Backblaze B2](https://rclone.org/b2/) * [Box](https://rclone.org/box/) @@ -1039,6 +1039,15 @@ recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: + +Rclone will sync the modification times of files and directories if +the backend supports it. If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. + **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. **Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. @@ -1070,7 +1079,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1084,6 +1093,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 
'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -1185,11 +1195,56 @@ the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory. +Rclone will sync the modification times of files and directories if +the backend supports it. If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. + **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics **Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info. +# Logger Flags + +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or +stdout if it is `-`) supplied. What they write is described in the +help below. For example `--differ` will write all paths which are +present on both the source and destination but different. + +The `--combined` flag will write a file (or stdout) which contains all +file paths with a symbol and then a space and then the path to tell +you what happened to it. These are reminiscent of diff files. + +- `= path` means path was found in source and destination and was identical +- `- path` means path was missing on the source, so only in the destination +- `+ path` means path was missing on the destination, so only in the source +- `* path` means path was present in source and destination but different. +- `! path` means there was an error reading or hashing the source or dest. + +The `--dest-after` flag writes a list file using the same format flags +as [`lsf`](https://rclone.org/commands/rclone_lsf/#synopsis) (including [customizable options +for hash, modtime, etc.](https://rclone.org/commands/rclone_lsf/#synopsis)) +Conceptually it is similar to rsync's `--itemize-changes`, but not identical +-- it should output an accurate list of what will be on the destination +after the sync. + +Note that these logger flags have a few limitations, and certain scenarios +are not currently supported: + +- `--max-duration` / `CutoffModeHard` +- `--compare-dest` / `--copy-dest` +- server-side moves of an entire dir at once +- High-level retries, because there would be duplicates (use `--retries 1` to disable) +- Possibly some unusual error scenarios + +Note also that each file is logged during the sync, as opposed to after, so it +is most useful as a predictor of what SHOULD happen to each file +(which may or may not match what actually DID.) 
+ ``` rclone sync source:path dest:path [flags] @@ -1198,8 +1253,24 @@ rclone sync source:path dest:path [flags] ## Options ``` + --absolute Put a leading / in front of path names + --combined string Make a combined report of changes to this file --create-empty-src-dirs Create empty source dirs on destination after sync + --csv Output in CSV format + --dest-after string Report all files that exist on the dest post-sync + --differ string Report all non-matching files to this file + -d, --dir-slash Append a slash to directory names (default true) + --dirs-only Only list directories + --error string Report all files with errors (hashing or reading) to this file + --files-only Only list files (default true) + -F, --format string Output format - see lsf help for details (default "p") + --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") -h, --help help for sync + --match string Report all matching files to this file + --missing-on-dst string Report all files missing from the destination to this file + --missing-on-src string Report all files missing from the source to this file + -s, --separator string Separator for the items in the format (default ";") + -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) ``` @@ -1217,7 +1288,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1231,6 +1302,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -1250,6 +1322,7 @@ Flags just used for `rclone sync`. --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -1345,6 +1418,14 @@ whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. +Rclone will sync the modification times of files and directories if +the backend supports it. 
If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. + **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. @@ -1378,7 +1459,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1392,6 +1473,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -2783,6 +2865,11 @@ On each successive run it will: Changes include `New`, `Newer`, `Older`, and `Deleted` files. - Propagate changes on Path1 to Path2, and vice-versa. +Bisync is **in beta** and is considered an **advanced command**, so use with care. +Make sure you have read and understood the entire [manual](https://rclone.org/bisync) +(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using, +or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/). + See [full bisync description](https://rclone.org/bisync/) for details. @@ -2793,20 +2880,31 @@ rclone bisync remote1:path1 remote2:path2 [flags] ## Options ``` - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove ALL empty directories at the final cleanup step. - --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. 
- --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --compare string Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime') + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none") + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict') + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none") + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) ``` @@ -2824,7 +2922,7 @@ Flags for anything which can Copy a file. 
--ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -2838,6 +2936,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -3956,7 +4055,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -3970,6 +4069,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -4036,7 +4136,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li # rclone copyurl -Copy url content to dest. +Copy the contents of the URL supplied to dest:path. ## Synopsis @@ -4044,11 +4144,14 @@ Copy url content to dest. Download a URL's content and copy it to the destination without saving it in temporary storage. -Setting `--auto-filename` will attempt to automatically determine the filename from the URL -(after any redirections) and used in the destination path. -With `--auto-filename-header` in -addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. -With `--print-filename` in addition, the resulting file name will be printed. +Setting `--auto-filename` will attempt to automatically determine the +filename from the URL (after any redirections) and use it in the +destination path.
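+ +For example, the following would name the destination file after the final redirected URL (a sketch; the URL and remote name are illustrative): + + rclone copyurl --auto-filename https://example.com/file.zip remote:dir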
+ +With `--auto-filename-header` in addition, if a specific filename is +set in HTTP headers, it will be used instead of the name from the URL. +With `--print-filename` in addition, the resulting file name will be +printed. Setting `--no-clobber` will prevent overwriting a file on the destination if there is one with the same name. @@ -4056,6 +4159,17 @@ destination if there is one with the same name. Setting `--stdout` or making the output file name `-` will cause the output to be written to standard output. +## Troubleshooting + +If you can't get `rclone copyurl` to work then here are some things you can try: + +- `--disable-http2` rclone will use HTTP2 if available - try disabling it +- `--bind 0.0.0.0` rclone will use IPv6 if available - try disabling it +- `--bind ::0` to disable IPv4 +- `--user-agent curl` - some sites have whitelists for curl's user-agent - try that +- Make sure the site works with `curl` directly + + ``` rclone copyurl https://example.com dest:path [flags] @@ -4627,7 +4741,7 @@ List all the remotes in the config file and defined in environment variables. rclone listremotes lists all the available remotes from the config file. -When used with the `--long` flag it lists the types too. +When used with the `--long` flag it lists the types and the descriptions too. ``` rclone listremotes [flags] ``` -h, --help help for listremotes - --long Show the type as well as names + --long Show the type and the description as well as names ``` @@ -4750,6 +4864,19 @@ those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files rclone copy --files-from-raw new_files /path/to/local remote:path +The default time format is `'2006-01-02 15:04:05'`. +[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with the `--time-format` flag. +Examples: + + rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' + rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' + rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' + rclone lsf remote:path --format pt --time-format RFC3339 + rclone lsf remote:path --format pt --time-format DateOnly + rclone lsf remote:path --format pt --time-format max + +`--time-format max` will automatically truncate '`2006-01-02 15:04:05.000000000`' +to the maximum precision supported by the remote. + Any of the filtering options can be applied to this command.
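+ +For example, to list only JPEG files with full-precision timestamps (a sketch; the include pattern is illustrative): + + rclone lsf remote:path --include "*.jpg" --format pt --time-format max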
@@ -4781,16 +4908,17 @@ rclone lsf remote:path [flags] ## Options ``` - --absolute Put a leading / in front of path names - --csv Output in CSV format - -d, --dir-slash Append a slash to directory names (default true) - --dirs-only Only list directories - --files-only Only list files - -F, --format string Output format - see help for details (default "p") - --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") - -h, --help help for lsf - -R, --recursive Recurse into the listing - -s, --separator string Separator for the items in the format (default ";") + --absolute Put a leading / in front of path names + --csv Output in CSV format + -d, --dir-slash Append a slash to directory names (default true) + --dirs-only Only list directories + --files-only Only list files + -F, --format string Output format - see help for details (default "p") + --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") + -h, --help help for lsf + -R, --recursive Recurse into the listing + -s, --separator string Separator for the items in the format (default ";") + -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) ``` @@ -5271,12 +5399,21 @@ Mounting on macOS can be done either via [built-in NFS server](https://rclone.or FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. -# NFS mount +#### Unicode Normalization + +It is highly recommended to keep the default of `--no-unicode-normalization=false` +for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). + +### NFS mount This method spins up an NFS server using the [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) command and mounts it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to send a SIGTERM signal to the rclone process using the `kill` command to stop the mount. +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. +This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +but consider lowering this limit if the server's system resource usage causes problems. + ### macFUSE Notes If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from @@ -5304,15 +5441,6 @@ As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): This means that viewing files with various tools, notably macOS Finder, will cause rclone to update the modification time of the file. This may make rclone upload a full new copy of the file. - -#### Unicode Normalization - -Rclone includes flags for unicode normalization with macFUSE that should be updated -for FUSE-T. See [this forum post](https://forum.rclone.org/t/some-unicode-forms-break-mount-on-macos-with-fuse-t/36403) -and [FUSE-T issue #16](https://github.com/macos-fuse-t/fuse-t/issues/16). The following -flag should be added to the `rclone mount` command. - - -o modules=iconv,from_code=UTF-8,to_code=UTF-8 #### Read Only mounts @@ -5785,6 +5913,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable, however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, hiding the +duplicates, and logging an error, similar to how this is handled in `rclone +sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) @@ -5843,6 +5993,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -5855,7 +6006,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -5963,7 +6114,7 @@ Flags for anything which can Copy a file.
--ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -5977,6 +6128,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -6164,6 +6316,925 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. +# rclone nfsmount + +Mount the remote as a file system on a mountpoint. + +## Synopsis + +rclone nfsmount allows Linux, FreeBSD, macOS and Windows to +mount any of Rclone's cloud storage systems as a file system with +FUSE. + +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. + +On Linux and macOS, you can run mount in either foreground or background (aka +daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag +to force background mode. On Windows you can run mount in foreground only, +the flag is ignored. + +In background mode rclone acts as a generic Unix mount program: the main +program starts, spawns a background rclone process to set up and maintain the +mount, waits until success or timeout and exits with appropriate code +(killing the child process if it fails). + +On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` +is an **empty** **existing** directory: + + rclone nfsmount remote:path/to/files /path/to/local/mount + +On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) +for details. If foreground mount is used interactively from a console window, +rclone will serve the mount and occupy the console so another window should be +used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.
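+ +For example, to run the same mount in background (daemon) mode on Linux or macOS (a sketch reusing the paths from above): + + rclone nfsmount remote:path/to/files /path/to/local/mount --daemon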
+ +The following examples will mount to an automatically assigned drive, +to specific drive letter `X:`, to path `C:\path\parent\mount` +(where parent directory or drive must exist, and mount must **not** exist, +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and +the last example will mount as network share `\\cloud\remote` and map it to an +automatically assigned drive: + + rclone nfsmount remote:path/to/files * + rclone nfsmount remote:path/to/files X: + rclone nfsmount remote:path/to/files C:\path\parent\mount + rclone nfsmount remote:path/to/files \\cloud\remote + +When the program ends while in foreground mode, either via Ctrl+C or receiving +a SIGINT or SIGTERM signal, the mount should be automatically stopped. + +When running in background mode the user will have to stop the mount manually: + + # Linux + fusermount -u /path/to/local/mount + # OS X + umount /path/to/local/mount + +The umount operation can fail, for example when the mountpoint is busy. +When that happens, it is the user's responsibility to stop the mount manually. + +The size of the mounted file system will be set according to information retrieved +from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/) +command. Remotes with unlimited storage may report the used size only, +then an additional 1 PiB of free space is assumed. If the remote does not +[support](https://rclone.org/overview/#optional-features) the about feature +at all, then 1 PiB is set as both the total and the free size. + +## Installing on Windows + +To run rclone nfsmount on Windows, you will need to +download and install [WinFsp](http://www.secfs.net/winfsp/). + +[WinFsp](https://github.com/winfsp/winfsp) is an open-source +Windows File System Proxy which makes it easy to write user space file +systems for Windows. It provides a FUSE emulation layer which rclone +uses in combination with [cgofuse](https://github.com/winfsp/cgofuse). +Both of these packages are by Bill Zissimopoulos who was very helpful +during the implementation of rclone nfsmount for Windows. + +### Mounting modes on Windows + +Unlike other operating systems, Microsoft Windows provides a different filesystem +type for network and fixed drives. It optimises access on the assumption fixed +disk drives are fast and reliable, while network drives have relatively high latency +and less reliability. Some settings can also be differentiated between the two types, +for example that Windows Explorer should just display icons and not create preview +thumbnails for image and video files on network drives. + +In most cases, rclone will mount the remote as a normal, fixed disk drive by default. +However, you can also choose to mount it as a remote network drive, often described +as a network share. If you mount an rclone remote using the default, fixed drive mode +and experience unexpected program errors, freezes or other issues, consider mounting +as a network drive instead. + +When mounting as a fixed disk drive you can either mount to an unused drive letter, +or to a path representing a **nonexistent** subdirectory of an **existing** parent +directory or drive. Using the special value `*` will tell rclone to +automatically assign the next available drive letter, starting with Z: and moving backward.
+Examples: + + rclone nfsmount remote:path/to/files * + rclone nfsmount remote:path/to/files X: + rclone nfsmount remote:path/to/files C:\path\parent\mount + rclone nfsmount remote:path/to/files X: + +Option `--volname` can be used to set a custom volume name for the mounted +file system. The default is to use the remote name and path. + +To mount as a network drive, you can add option `--network-mode` +to your nfsmount command. Mounting to a directory path is not supported in +this mode, it is a limitation Windows imposes on junctions, so the remote must always +be mounted to a drive letter. + + rclone nfsmount remote:path/to/files X: --network-mode + +A volume name specified with `--volname` will be used to create the network share path. +A complete UNC path, such as `\\cloud\remote`, optionally with path +`\\cloud\remote\madeup\path`, will be used as is. Any other +string will be used as the share part, after a default prefix `\\server\`. +If no volume name is specified then `\\server\share` will be used. +You must make sure the volume name is unique when you are mounting more than one drive, +or else the mount command will fail. The share name will be treated as the volume label for +the mapped drive, shown in Windows Explorer etc, while the complete +`\\server\share` will be reported as the remote UNC path by +`net use` etc, just like a normal network drive mapping. + +If you specify a full network share UNC path with `--volname`, this will implicitly +set the `--network-mode` option, so the following two examples have the same result: + + rclone nfsmount remote:path/to/files X: --network-mode + rclone nfsmount remote:path/to/files X: --volname \\server\share + +You may also specify the network share UNC path as the mountpoint itself. Then rclone +will automatically assign a drive letter, same as with `*` and use that as +mountpoint, and instead use the UNC path specified as the volume name, as if it were +specified with the `--volname` option. This will also implicitly set +the `--network-mode` option. This means the following two examples have the same result: + + rclone nfsmount remote:path/to/files \\cloud\remote + rclone nfsmount remote:path/to/files * --volname \\cloud\remote + +There is yet another way to enable network mode, and to set the share path, +and that is to pass the "native" libfuse/WinFsp option directly: +`--fuse-flag --VolumePrefix=\server\share`. Note that the path +must be with just a single backslash prefix in this case. + + +*Note:* In previous versions of rclone this was the only supported method. + +[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) + +See also the [Limitations](#limitations) section below. + +### Windows filesystem permissions + +The FUSE emulation layer on Windows must convert between the POSIX-based +permission model used in FUSE, and the permission model used in Windows, +based on access-control lists (ACL). + +The mounted filesystem will normally get three entries in its access-control list (ACL), +representing permissions for the POSIX permission scopes: Owner, group and others. +By default, the owner and group will be taken from the current user, and the built-in +group "Everyone" will be used to represent others. The user/group can be customized +with FUSE options "UserName" and "GroupName", +e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
+The permissions on each entry will be set according to [options](#options) +`--dir-perms` and `--file-perms`, which take a value in traditional Unix +[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation). + +The default permissions correspond to `--file-perms 0666 --dir-perms 0777`, +i.e. read and write permissions to everyone. This means you will not be able +to start any programs from the mount. To be able to do that you must add +execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it +to everyone. If the program needs to write files, chances are you will +have to enable [VFS File Caching](#vfs-file-caching) as well (see also +[limitations](#limitations)). Note that the default write permission has +some restrictions for accounts other than the owner, specifically it lacks +the "write extended attributes" permission, as explained next. + +The mapping of permissions is not always trivial, and the result you see in +Windows Explorer may not be exactly like you expected. For example, when setting +a value that includes write access for the group or others scope, this will be +mapped to individual permissions "write attributes", "write data" and +"append data", but not "write extended attributes". Windows will then show this +as basic permission "Special" instead of "Write", because "Write" also covers +the "write extended attributes" permission. When setting digit 0 for group or +others, to indicate no permissions, they will still get individual permissions +"read attributes", "read extended attributes" and "read permissions". This is +done for compatibility reasons, e.g. to allow users without additional +permissions to be able to read basic metadata about files like in Unix. + +WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity", +that allows the complete specification of file security descriptors using +[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format). +With this you get detailed control of the resulting permissions, compared +to use of the POSIX permissions described above, and no additional permissions +will be added automatically for compatibility with Unix. Some example use +cases follow below. + +If you set POSIX permissions for only allowing access to the owner, +using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in +"Everyone" group will still be given some special permissions, as described +above. Some programs may then (incorrectly) interpret this as the file being +accessible by everyone, for example an SSH client may warn about "unprotected +private key file". You can work around this by specifying +`-o FileSecurity="D:P(A;;FA;;;OW)"`, which sets file all access (FA) to the +owner (OW), and nothing else. + +When setting write permissions then, except for the owner, this does not +include the "write extended attributes" permission, as mentioned above. +This may prevent applications from writing to files, giving a permission denied +error instead. To set working write permissions for the built-in "Everyone" +group, similar to what it gets by default but with the addition of the +"write extended attributes" permission, you can specify +`-o FileSecurity="D:P(A;;FRFW;;;WD)"`, which sets file read (FR) and file +write (FW) to everyone (WD). If file execute (FX) is also needed, then change +to `-o FileSecurity="D:P(A;;FRFWFX;;;WD)"`, or set file all access (FA) to +get full access permissions, including delete, with +`-o FileSecurity="D:P(A;;FA;;;WD)"`.
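+ +Putting this together, a complete mount command might look like this (a sketch; the remote, drive letter and SDDL string are taken from the examples above): + + rclone nfsmount remote:path/to/files X: -o FileSecurity="D:P(A;;FRFW;;;WD)"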
+ +### Windows caveats + +Drives created as Administrator are not visible to other accounts, +not even an account that was elevated to Administrator with the +User Account Control (UAC) feature. A result of this is that if you mount +to a drive letter from a Command Prompt run as Administrator, and then try +to access the same drive from Windows Explorer (which does not run as +Administrator), you will not be able to see the mounted drive. + +If you don't need to access the drive from applications running with +administrative privileges, the easiest way around this is to always +create the mount from a non-elevated command prompt. + +To make mapped drives available to the user account that created them +regardless of whether it is elevated or not, there is a special Windows setting called +[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry) +that can be enabled. + +It is also possible to make a drive mount available to everyone on the system, +by running the process creating it as the built-in SYSTEM account. +There are several ways to do this: One is to use the command-line +utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec), +from Microsoft's Sysinternals suite, which has option `-s` to start +processes as the SYSTEM account. Another alternative is to run the mount +command from a Windows Scheduled Task, or a Windows Service, configured +to run as the SYSTEM account. A third alternative is to use the +[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture). +Read more in the [install documentation](https://rclone.org/install/). +Note that when running rclone as another user, it will not use +the configuration file from your profile unless you tell it to +with the [`--config`](https://rclone.org/docs/#config-config-file) option. +Note also that it is now the SYSTEM account that will have the owner +permissions, and other accounts will have permissions according to the +group or others scopes. As mentioned above, these will then not get the +"write extended attributes" permission, and this may prevent writing to +files. You can work around this with the FileSecurity option, see +example above. + +Note that mapping to a directory path, instead of a drive letter, +does not suffer from the same limitations. + +## Mounting on macOS + +Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) +(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional +FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system +which "mounts" via an NFSv4 local server. + +#### Unicode Normalization + +It is highly recommended to keep the default of `--no-unicode-normalization=false` +for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). + +### NFS mount + +This method spins up an NFS server using the [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) command and mounts +it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to +send a SIGTERM signal to the rclone process using the `kill` command to stop the mount.
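+ +For example (a sketch, assuming only one rclone instance is running): + + rclone nfsmount remote:path/to/files /path/to/local/mount --daemon + # later, to stop the mount: + kill $(pidof rclone)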
+ +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. +This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +but consider lowering this limit if the server's system resource usage causes problems. + +### macFUSE Notes + +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from +the website, rclone will locate the macFUSE libraries without any further intervention. +If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, +the following additional steps are required. + + sudo mkdir /usr/local/lib + cd /usr/local/lib + sudo ln -s /opt/local/lib/libfuse.2.dylib + +### FUSE-T Limitations, Caveats, and Notes + +There are some limitations, caveats, and notes about how it works. These are current as +of FUSE-T version 1.0.14. + +#### ModTime update on read + +As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): + +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with +> 'touch -m' and 'touch -a' commands + +This means that viewing files with various tools, notably macOS Finder, will cause rclone +to update the modification time of the file. This may make rclone upload a full new copy +of the file. + +#### Read Only mounts + +When mounting with `--read-only`, attempts to write to files will fail *silently* as +opposed to with a clear warning as in macFUSE. + +## Limitations + +Without the use of `--vfs-cache-mode` this can only write files +sequentially, and it can only seek when reading. This means that many +applications won't work with their files on an rclone mount without +`--vfs-cache-mode writes` or `--vfs-cache-mode full`. +See the [VFS File Caching](#vfs-file-caching) section for more info. +When using NFS mount on macOS, if you don't specify `--vfs-cache-mode` +the mount point will be read-only. + +The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2) +do not support the concept of empty directories, so empty +directories will have a tendency to disappear once they fall out of +the directory cache. + +When `rclone mount` is invoked on Unix with the `--daemon` flag, the main rclone +program will wait for the background mount to become ready or until the timeout +specified by the `--daemon-wait` flag. On Linux it can check mount status using +ProcFS so the flag in fact sets **maximum** time to wait, while the real wait +can be less. On macOS / BSD the time to wait is constant and the check is +performed only at the end. We advise you to set wait time on macOS reasonably. + +Only supported on Linux, FreeBSD, OS X and Windows at the moment. + +## rclone nfsmount vs rclone sync/copy + +File systems expect things to be 100% reliable, whereas cloud storage +systems are a long way from 100% reliable. The rclone sync/copy +commands cope with this with lots of retries. However rclone nfsmount +can't use retries in the same way without making local copies of the +uploads. Look at the [VFS File Caching](#vfs-file-caching) section +for solutions to make nfsmount more reliable. + +## Attribute caching + +You can use the flag `--attr-timeout` to set the time the kernel caches +the attributes (size, modification time, etc.) for directory entries. + +The default is `1s` which caches files just long enough to avoid +too many callbacks to rclone from the kernel.
+ +In theory 0s should be the correct value for filesystems which can +change outside the control of the kernel. However this causes quite a +few problems such as +[rclone using too much memory](https://github.com/rclone/rclone/issues/2157), +[rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112) +and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147). + +The kernel can cache the info about a file for the time given by +`--attr-timeout`. You may see corruption if the remote file changes +length during this window. It will show up as either a truncated file +or a file with garbage on the end. With `--attr-timeout 1s` this is +very unlikely but not impossible. The higher you set `--attr-timeout` +the more likely it is. The default setting of "1s" is the lowest +setting which mitigates the problems above. + +If you set it higher (`10s` or `1m` say) then the kernel will call +back to rclone less often making it more efficient, however there is +more chance of the corruption issue above. + +If files don't change on the remote outside of the control of rclone +then there is no chance of corruption. + +This is the same as setting the attr_timeout option in mount.fuse. + +## Filters + +Note that all the rclone filters can be used to select a subset of the +files to be visible in the mount. + +## systemd + +When running rclone nfsmount as a systemd service, it is possible +to use Type=notify. In this case the service will enter the started state +after the mountpoint has been successfully set up. +Units having the rclone nfsmount service specified as a requirement +will see all files and folders immediately in this mode. + +Note that systemd runs mount units without any environment variables including +`PATH` or `HOME`. This means that tilde (`~`) expansion will not work +and you should provide `--config` and `--cache-dir` explicitly as absolute +paths via rclone arguments. +Since mounting requires the `fusermount` program, rclone will use the fallback +PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount` +is present on this PATH. + +## Rclone as Unix mount helper + +The core Unix program `/bin/mount` normally takes the `-t FSTYPE` argument +then runs the `/sbin/mount.FSTYPE` helper program passing it mount options +as `-o key=val,...` or `--opt=...`. Automount (classic or systemd) behaves +in a similar way. + +rclone by default expects GNU-style flags `--key val`. To run it as a mount +helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally +`/usr/bin/rclonefs`, e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`. +rclone will detect it and translate command-line arguments appropriately. 
+ +Now you can run classic mounts like this: +``` +mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem +``` + +or create systemd mount units: +``` +# /etc/systemd/system/mnt-data.mount +[Unit] +Description=Mount for /mnt/data +[Mount] +Type=rclone +What=sftp1:subdir +Where=/mnt/data +Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone +``` + +optionally accompanied by a systemd automount unit +``` +# /etc/systemd/system/mnt-data.automount +[Unit] +Description=AutoMount for /mnt/data +[Automount] +Where=/mnt/data +TimeoutIdleSec=600 +[Install] +WantedBy=multi-user.target +``` + +or add in `/etc/fstab` a line like +``` +sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 +``` + +or use classic Automountd. +Remember to provide explicit `config=...,cache-dir=...` as a workaround for +mount units being run without `HOME`. + +Rclone in the mount helper mode will split `-o` argument(s) by comma, replace `_` +by `-` and prepend `--` to get the command-line flags. Options containing commas +or spaces can be wrapped in single or double quotes. Any inner quotes inside outer +quotes of the same type should be doubled. + +Mount option syntax includes a few extra options treated specially: + +- `env.NAME=VALUE` will set an environment variable for the mount process. + This helps with Automountd and Systemd.mount which don't allow setting + custom environment for mount helpers. + Typically you will use `env.HTTPS_PROXY=proxy.host:3128` or `env.HOME=/root` +- `command=cmount` can be used to run `cmount` or any other rclone command + rather than the default `mount`. +- `args2env` will pass mount options to the mount helper running in background + via environment variables instead of command line arguments. This allows + hiding secrets from such commands as `ps` or `pgrep`. +- `vv...` will be transformed into appropriate `--verbose=N` +- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and the like + are intended only for Automountd and ignored by rclone. + +## VFS - Virtual File System + +This command uses the VFS layer. This adapts the cloud storage objects +that rclone uses into something which looks much more like a disk +filing system. + +Cloud storage objects have lots of properties which aren't like disk +files - you can't extend them or write to the middle of them, so the +VFS layer has to deal with that. Because there is no one right way of +doing this there are various options explained below. + +The VFS layer also implements a directory cache - this caches info +about files and directories (but not the data) in memory. + +## VFS Directory Cache + +Using the `--dir-cache-time` flag, you can control how long a +directory should be considered up to date and not refreshed from the +backend. Changes made through the VFS will appear immediately or +invalidate the cache. + + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + +However, changes made directly on the cloud storage by the web +interface or a different copy of rclone will only be picked up once +the directory cache expires if the backend configured does not support +polling for changes.
If the backend supports polling, changes will be +picked up within the polling interval. + +You can send a `SIGHUP` signal to rclone for it to flush all +directory caches, regardless of how old they are. Assuming only one +rclone instance is running, you can reset the cache like this: + + kill -SIGHUP $(pidof rclone) + +If you configure rclone with a [remote control](/rc) then you can use +rclone rc to flush the whole directory cache: + + rclone rc vfs/forget + +Or individual files or directories: + + rclone rc vfs/forget file=path/to/file dir=path/to/dir + +## VFS File Buffering + +The `--buffer-size` flag determines the amount of memory +that will be used to buffer data in advance. + +Each open file will try to keep the specified amount of data in memory +at all times. The buffered data is bound to one open file and won't be +shared. + +This flag is an upper limit for the used memory per open file. The +buffer will only use memory for data that is downloaded but not +yet read. If the buffer is empty, only a small amount of memory will +be used. + +The maximum memory used by rclone for buffering can be up to +`--buffer-size * open files`. + +## VFS File Caching + +These flags control the VFS file caching options. File caching is +necessary to make the VFS layer appear compatible with a normal file +system. It can be disabled at the cost of some compatibility. + +For example you'll need to enable VFS caching if you want to read and +write simultaneously to a file. See below for more details. + +Note that the VFS cache is separate from the cache backend and you may +find that you need one or the other or both. + + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + +If run with `-vv` rclone will print the location of the file cache. The +files are stored in the user cache file area which is OS dependent but +can be controlled with `--cache-dir` or setting the appropriate +environment variable. + +The cache has 4 different modes selected by `--vfs-cache-mode`. +The higher the cache mode the more compatible rclone becomes at the +cost of using disk space. + +Note that files are written back to the remote only when they are +closed and if they haven't been accessed for `--vfs-write-back` +seconds. If rclone is quit or dies with files that haven't been +uploaded, these will be uploaded next time rclone is run with the same +flags. + +If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note +that the cache may exceed these quotas for two reasons. Firstly +because it is only checked every `--vfs-cache-poll-interval`. Secondly +because open files cannot be evicted from the cache. When +`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded, +rclone will attempt to evict the least accessed files from the cache +first. rclone will start with files that haven't been accessed for the +longest. This cache flushing strategy is efficient and more relevant +files are likely to remain cached.
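+ +For example, a mount with a bounded cache might look like this (a sketch; the size values are illustrative): + + rclone nfsmount remote:path /path/to/mountpoint --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-min-free-space 5G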
+ +The `--vfs-cache-max-age` will evict files from the cache +after the set time since last access has passed. The default value of +1 hour will start evicting files from cache that haven't been accessed +for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 +and will wait for 1 more hour before evicting. Specify the time with +standard notation, s, m, h, d, w . + +You **should not** run two copies of rclone using the same VFS cache +with the same or overlapping remotes if using `--vfs-cache-mode > off`. +This can potentially cause data corruption if you do. You can work +around this by giving each rclone its own cache hierarchy with +`--cache-dir`. You don't need to worry about this if the remotes in +use don't overlap. + +### --vfs-cache-mode off + +In this mode (the default) the cache will read directly from the remote and write +directly to the remote without caching anything on disk. + +This will mean some operations are not possible + + * Files can't be opened for both read AND write + * Files opened for write can't be seeked + * Existing files opened for write must have O_TRUNC set + * Files open for read with O_TRUNC will be opened write only + * Files open for write only will behave as if O_TRUNC was supplied + * Open modes O_APPEND, O_TRUNC are ignored + * If an upload fails it can't be retried + +### --vfs-cache-mode minimal + +This is very similar to "off" except that files opened for read AND +write will be buffered to disk. This means that files opened for +write will be a lot more compatible, but uses the minimal disk space. + +These operations are not possible + + * Files opened for write only can't be seeked + * Existing files opened for write must have O_TRUNC set + * Files opened for write only will ignore O_APPEND, O_TRUNC + * If an upload fails it can't be retried + +### --vfs-cache-mode writes + +In this mode files opened for read only are still read directly from +the remote, write only and read/write files are buffered to disk +first. + +This mode should support all normal file system operations. + +If an upload fails it will be retried at exponentially increasing +intervals up to 1 minute. + +### --vfs-cache-mode full + +In this mode all reads and writes are buffered to and from disk. When +data is read from the remote this is buffered to disk as well. + +In this mode the files in the cache will be sparse files and rclone +will keep track of which bits of the files it has downloaded. + +So if an application only reads the starts of each file, then rclone +will only buffer the start of the file. These files will appear to be +their full size in the cache, but they will be sparse files with only +the data that has been downloaded present in them. + +This mode should support all normal file system operations and is +otherwise identical to `--vfs-cache-mode` writes. + +When reading a file rclone will read `--buffer-size` plus +`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory +whereas the `--vfs-read-ahead` is buffered on disk. + +When using this mode it is recommended that `--buffer-size` is not set +too large and `--vfs-read-ahead` is set large if required. + +**IMPORTANT** not all file systems support sparse files. In particular +FAT/exFAT do not. Rclone will perform very badly if the cache +directory is on a filesystem which doesn't support sparse files and it +will log an ERROR message if one is detected. 
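+ +For example, to keep the memory buffer modest while allowing a large on-disk read-ahead (a sketch; the sizes are illustrative): + + rclone nfsmount remote:path /path/to/mountpoint --vfs-cache-mode full --buffer-size 32M --vfs-read-ahead 256M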
+ +### Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file +copy has changed relative to a remote file. Fingerprints are made +from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take +an extra API call per object, or extra work per object). + +For example `hash` is slow with the `local` and `sftp` backends as +they have to read the entire file and hash it, and `modtime` is slow +with the `s3`, `swift`, `ftp` and `qingstor` backends because they +need to do an extra API call to fetch it. + +If you use the `--vfs-fast-fingerprint` flag then rclone will not +include the slow operations in the fingerprint. This makes the +fingerprinting less accurate but much faster and will improve the +opening time of cached files. + +If you are running a vfs cache over `local`, `s3` or `swift` backends +then using this flag is recommended. + +Note that if you change the value of this flag, the fingerprints of +the files in the cache may be invalidated and the files will need to +be downloaded again. + +## VFS Chunked Reading + +When rclone reads files from a remote it reads them in chunks. This +means that rather than requesting the whole file rclone reads the +chunk specified. This can reduce the used download quota for some +remotes by requesting only chunks from the remote that are actually +read, at the cost of an increased number of requests. + +These flags control the chunking: + + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + +Rclone will start reading a chunk of size `--vfs-read-chunk-size`, +and then double the size for each read. When `--vfs-read-chunk-size-limit` is +specified, and greater than `--vfs-read-chunk-size`, the chunk size for each +open file will get doubled only until the specified value is reached. If the +value is "off", which is the default, the limit is disabled and the chunk size +will grow indefinitely. + +With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. +When `--vfs-read-chunk-size-limit 500M` is specified, the result would be +0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. + +Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. + +## VFS Performance + +These flags may be used to enable/disable features of the VFS for +performance or other reasons. See also the [chunked reading](#vfs-chunked-reading) +feature. + +In particular S3 and Swift benefit hugely from the `--no-modtime` flag +(or use `--use-server-modtime` for a slightly different effect) as each +read of the modification time takes a transaction. + + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. + +Sometimes rclone is delivered reads or writes out of order. Rather +than seeking rclone will wait a short time for the in sequence read or +write to come in. These flags only come into effect when not using an +on disk cache file.
+ + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + +When using VFS write caching (`--vfs-cache-mode` with value writes or full), +the global flag `--transfers` can be set to adjust the number of parallel uploads of +modified files from the cache (the related global flag `--checkers` has no effect on the VFS). + + --transfers int Number of file transfers to run in parallel (default 4) + +## VFS Case Sensitivity + +Linux file systems are case-sensitive: two files can differ only +by case, and the exact case must be used when opening a file. + +File systems in modern Windows are case-insensitive but case-preserving: +although existing files can be opened using any case, the exact case used +to create the file is preserved and available for programs to query. +It is not allowed for two files in the same directory to differ only by case. + +Usually file systems on macOS are case-insensitive. It is possible to make macOS +file systems case-sensitive but that is not the default. + +The `--vfs-case-insensitive` VFS flag controls how rclone handles these +two cases. If its value is "false", rclone passes file names to the remote +as-is. If the flag is "true" (or appears without a value on the +command line), rclone may perform a "fixup" as explained below. + +The user may specify a file name to open/delete/rename/etc with a case +different than what is stored on the remote. If an argument refers +to an existing file with exactly the same name, then the case of the existing +file on the disk will be used. However, if a file name with exactly the same +name is not found but a name differing only by case exists, rclone will +transparently fixup the name. This fixup happens only when an existing file +is requested. Case sensitivity of file names created anew by rclone is +controlled by the underlying remote. + +Note that case sensitivity of the operating system running rclone (the target) +may differ from case sensitivity of a file system presented by rclone (the source). +The flag controls whether "fixup" is performed to satisfy the target. + +If the flag is not provided on the command line, then its default value depends +on the operating system where rclone runs: "true" on Windows and macOS, "false" +otherwise. If the flag is provided without a value, then it is "true". + +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. 
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable, however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, hiding the +duplicates, and logging an error, similar to how this is handled in `rclone +sync`. + +## VFS Disk Options + +This flag allows you to manually set the statistics about the filing system. +It can be useful when those statistics cannot be read correctly automatically. + + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + +## Alternate report of used bytes + +Some backends, most notably S3, do not report the amount of bytes used. +If you need this information to be available when running `df` on the +filesystem, then pass the flag `--vfs-used-is-size` to rclone. +With this flag set, instead of relying on the backend to report this +information, rclone will scan the whole remote similar to `rclone size` +and compute the total used space itself. + +_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +result is accurate. However, this is very inefficient and may cost lots of API +calls resulting in extra charges. Use it as a last resort and only with caching. + + +``` +rclone nfsmount remote:path /path/to/mountpoint [flags] +``` + +## Options + +``` + --addr string IPaddress:Port or :Port to bind server to + --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows) + --allow-other Allow access to other users (not supported on Windows) + --allow-root Allow access to root user (not supported on Windows) + --async-read Use asynchronous reads (not supported on Windows) (default true) + --attr-timeout Duration Time for which file/directory attributes are cached (default 1s) + --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,...
to monitor) (not supported on Windows) + --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s) + --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s) + --debug-fuse Debug the FUSE internals - needs -v + --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows) + --devname string Set the device name - default is remote:path + --dir-cache-time Duration Time to cache directory entries for (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required) + --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) + -h, --help help for nfsmount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki) + --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset) + --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only) + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) + --no-checksum Don't compare checksums on up/download + --no-modtime Don't read/write the modification time (can speed things up) + --no-seek Don't allow seeking in files + --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true) + --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) + -o, --option stringArray Option for libfuse/WinFsp (repeat if required) + --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) + --read-only Only allow read-only access + --sudo Use sudo to run the mount command as root. 
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) + --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) + --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) + --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-refresh Refreshes the directory cache recursively in the background on start + --vfs-used-is-size rclone size Use the rclone size algorithm for Used size + --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) + --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) + --volname string Set the volume name (supported on Windows and OSX only) + --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +``` + + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +# SEE ALSO + +* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + # rclone obscure Obscure password for use in the rclone config file. @@ -7052,6 +8123,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. 
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -7097,6 +8190,7 @@ rclone serve dlna remote:path [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -7109,7 +8203,7 @@ rclone serve dlna remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -7506,6 +8600,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed.
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -7569,6 +8685,7 @@ rclone serve docker [flags] --socket-gid int GID for unix socket (default: current process GID) (default 1000) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -7581,7 +8698,7 @@ rclone serve docker [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -7962,6 +9079,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed.
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -7994,7 +9133,7 @@ together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program -[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn @@ -8091,6 +9230,7 @@ rclone serve ftp remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default "anonymous") + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -8103,7 +9243,7 @@ rclone serve ftp remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -8583,6 +9723,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues.
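+
+As a minimal illustration of the problem (this snippet is not part of rclone,
+just a sketch using Python's standard `unicodedata` module), the NFC and NFD
+spellings of the same visible filename differ as raw strings but compare equal
+once normalized:
+
+```python
+import unicodedata
+
+nfc = unicodedata.normalize("NFC", "résumé.txt")  # precomposed é (U+00E9)
+nfd = unicodedata.normalize("NFD", "résumé.txt")  # e + combining acute (U+0301)
+
+print(nfc == nfd)                                # False: raw strings differ
+print(unicodedata.normalize("NFC", nfd) == nfc)  # True: canonically equivalent
+```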
+ +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -8615,7 +9777,7 @@ together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program -[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn @@ -8721,6 +9883,7 @@ rclone serve http remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -8733,7 +9896,7 @@ rclone serve http remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -8794,7 +9957,9 @@ NFS mount over local network, you need to specify the listening address and port Modifying files through NFS protocol requires VFS caching. Usually you will need to specify `--vfs-cache-mode` in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, -the mount will be read-only. +the mount will be read-only. Note also that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the caching handler.
+This should not be set too low or you may experience errors when trying to access files. The default is `1000000`, but consider lowering this limit if +the server's system resource usage causes problems. To serve NFS over the network use the following command: @@ -9121,6 +10286,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system.
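+
+For example (the listening address and size shown here are illustrative, not
+defaults):
+
+```
+rclone serve nfs remote: --addr :2049 --vfs-disk-space-total-size 256G
+```
+
+With this set, `df` run on a client of the mount reports a 256G total size
+rather than a value derived from the backend.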
@@ -9155,6 +10342,7 @@ rclone serve nfs remote:path [flags] --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for nfs + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) --no-checksum Don't compare checksums on up/download --no-modtime Don't read/write the modification time (can speed things up) --no-seek Don't allow seeking in files @@ -9162,6 +10350,7 @@ rclone serve nfs remote:path [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -9174,7 +10363,7 @@ rclone serve nfs remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -9894,6 +11083,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. 
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -9948,6 +11159,7 @@ rclone serve s3 remote:path [flags] --server-write-timeout Duration Timeout for server writing data (default 1h0m0s) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -9960,7 +11172,7 @@ rclone serve s3 remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -10371,6 +11583,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed.
However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -10403,7 +11637,7 @@ together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program -[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn @@ -10500,6 +11734,7 @@ rclone serve sftp remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -10512,7 +11747,7 @@ rclone serve sftp remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -11021,6 +12256,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues.
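+
+For instance (the remote name is illustrative), both fixups can be set
+explicitly on the command line rather than relying on the OS-dependent
+defaults:
+
+```
+rclone serve webdav remote: --vfs-case-insensitive=false --no-unicode-normalization=false
+```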
+ +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, +hiding the duplicates, and logging an error, similar to how this is handled in +`rclone sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -11053,7 +12310,7 @@ together, if `--auth-proxy` is set the authorized keys option will be ignored. There is an example program -[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py) +[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. The program's job is to take a `user` and `pass` on the input and turn @@ -11161,6 +12418,7 @@ rclone serve webdav remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -11173,7 +12431,7 @@ rclone serve webdav remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -11978,18 +13236,21 @@ This can be used when scripting to make aged backups efficiently, e.g. ## Metadata support {#metadata} -Metadata is data about a file which isn't the contents of the file. -Normally rclone only preserves the modification time and the content -(MIME) type where possible. +Metadata is data about a file (or directory) which isn't the contents +of the file (or directory). Normally rclone only preserves the +modification time and the content (MIME) type where possible.
-Rclone supports preserving all the available metadata on files (not -directories) when using the `--metadata` or `-M` flag. +Rclone supports preserving all the available metadata on files and +directories when using the `--metadata` or `-M` flag. Exactly what metadata is supported and what that support means depends on the backend. Backends that support metadata have a metadata section in their docs and are listed in the [features table](https://rclone.org/overview/#features) (Eg [local](https://rclone.org/local/#metadata), [s3](/s3/#metadata)) +Some backends don't support metadata, some only support metadata on +files and some support metadata on both files and directories. + Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to be @@ -12010,6 +13271,14 @@ The [--metadata-mapper](#metadata-mapper) flag can be used to pass the name of a program which can transform metadata when it is being copied from source to destination. +Rclone supports `--metadata-set` and `--metadata-mapper` when doing +server side `Move` and server side `Copy`, but not when doing server +side `DirMove` (renaming a directory) as this would involve recursing +into the directory. Note that you can disable `DirMove` with +`--disable DirMove` and rclone will revert to using `Move` for +each individual object where `--metadata-set` and `--metadata-mapper` +are supported. + ### Types of metadata Metadata is divided into two types: System metadata and User metadata. @@ -12639,6 +13908,26 @@ triggering follow-on actions if data was copied, or skipping if not. NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly! +### --fix-case ### + +Normally, a sync to a case insensitive dest (such as macOS / Windows) will +not result in a matching filename if the source and dest filenames have +casing differences but are otherwise identical. For example, syncing `hello.txt` +to `HELLO.txt` will normally result in the dest filename remaining `HELLO.txt`. +If `--fix-case` is set, then `HELLO.txt` will be renamed to `hello.txt` +to match the source. + +NB: +- directory names with incorrect casing will also be fixed +- `--fix-case` will be ignored if `--immutable` is set +- using `--local-case-sensitive` instead is not advisable; +it will cause `HELLO.txt` to get deleted! +- the old dest filename must not be excluded by filters. +Be especially careful with [`--files-from`](https://rclone.org/filtering/#files-from-read-list-of-source-file-names), +which does not respect [`--ignore-case`](https://rclone.org/filtering/#ignore-case-make-searches-case-insensitive)! +- on remotes that do not support server-side move, `--fix-case` will require +downloading the file and re-uploading it + ### --fs-cache-expire-duration=TIME When using rclone via the API rclone caches created remotes for 5 @@ -13072,10 +14361,10 @@ some context for the `Metadata` which may be important. - `SrcFsType` is the name of the source backend. - `DstFs` is the config string for the remote that the object is being copied to - `DstFsType` is the name of the destination backend. -- `Remote` is the path of the file relative to the root. -- `Size`, `MimeType`, `ModTime` are attributes of the file. +- `Remote` is the path of the object relative to the root. +- `Size`, `MimeType`, `ModTime` are attributes of the object.
- `IsDir` is `true` if this is a directory (not yet implemented). -- `ID` is the source `ID` of the file if known. +- `ID` is the source `ID` of the object if known. - `Metadata` is the backend specific metadata as described in the backend docs. ```json @@ -13145,7 +14434,7 @@ json.dump(o, sys.stdout, indent="\t") ``` You can find this example (slightly expanded) in the rclone source code at -[bin/test_metadata_mapper.py](https://github.com/rclone/rclone/blob/master/test_metadata_mapper.py). +[bin/test_metadata_mapper.py](https://github.com/rclone/rclone/blob/master/bin/test_metadata_mapper.py). If you want to see the input to the metadata mapper and the output returned from it in the log you can use `-vv --dump mapper`. @@ -13205,7 +14494,7 @@ use multiple threads to transfer the file (default 256M). Capable backends are marked in the [overview](https://rclone.org/overview/#optional-features) as `MultithreadUpload`. (They -need to implement either the `OpenWriterAt` or `OpenChunkedWriter` +need to implement either the `OpenWriterAt` or `OpenChunkWriter` internal interfaces). These include `local`, `s3`, `azureblob`, `b2`, `oracleobjectstorage` and `smb` at the time of writing. @@ -13318,6 +14607,11 @@ files if they are incorrect as it would normally. This can be used if the remote is being synced with another tool also (e.g. the Google Drive client). +### --no-update-dir-modtime ### + +When using this flag, rclone won't update modification times of remote +directories if they are incorrect as it would normally. + ### --order-by string ### The `--order-by` flag controls the order in which files in the backlog @@ -14415,7 +15709,7 @@ For more help and alternate methods see: https://rclone.org/remote_setup/ Execute the following on the machine with the web browser (same rclone version recommended): - rclone authorize "amazon cloud drive" + rclone authorize "dropbox" Then paste the result below: result> @@ -14424,7 +15718,7 @@ result> Then on your main desktop machine ``` -rclone authorize "amazon cloud drive" +rclone authorize "dropbox" If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... @@ -15024,7 +16318,7 @@ E.g. for an alternative `filter-file.txt`: - * Files `file1.jpg`, `file3.png` and `file2.avi` are listed whilst -`secret17.jpg` and files without the suffix .jpg` or `.png` are excluded. +`secret17.jpg` and files without the suffix `.jpg` or `.png` are excluded. E.g. for an alternative `filter-file.txt`: @@ -16001,6 +17295,26 @@ See the [config password](https://rclone.org/commands/rclone_config_password/) c **Authentication is required for this call.** +### config/paths: Reads the config file path and other important paths. {#config-paths} + +Returns a JSON object with the following keys: + +- config: path to config file +- cache: path to root of cache directory +- temp: path to root of temporary directory + +Eg + + { + "cache": "/home/USER/.cache/rclone", + "config": "/home/USER/.rclone.conf", + "temp": "/tmp" + } + +See the [config paths](https://rclone.org/commands/rclone_config_paths/) command for more information on the above. + +**Authentication is required for this call.** + ### config/providers: Shows how providers are configured in the config file.
{#config-providers} Returns a JSON object: @@ -16786,6 +18100,50 @@ This command does not have a command line equivalent so use this instead: rclone rc --loopback operations/fsinfo fs=remote: +### operations/hashsum: Produces a hashsum file for all the objects in the path. {#operations-hashsum} + +Produces a hash file for all the objects in the path using the hash +named. The output is in the same format as the standard +md5sum/sha1sum tool. + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" for the source, "/" for local filesystem + - this can point to a file and just that file will be returned in the listing. +- hashType - type of hash to be used +- download - check by downloading rather than with hash (boolean) +- base64 - output the hashes in base64 rather than hex (boolean) + +If you supply the download flag, it will download the data from the +remote and create the hash on the fly. This can be useful for remotes +that don't support the given hash or if you really want to check all +the data. + +Note that if you wish to supply a checkfile to check hashes against +the current files then you should use operations/check instead of +operations/hashsum. + +Returns: + +- hashsum - array of strings of the hashes +- hashType - type of hash used + +Example: + + $ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true + { + "hashType": "md5", + "hashsum": [ + "WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh", + "v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh", + "VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh" + ] + } + +See the [hashsum](https://rclone.org/commands/rclone_hashsum/) command for more information on the above. + +**Authentication is required for this call.** + ### operations/list: List the given remote and path in JSON format {#operations-list} This takes the following parameters: @@ -17152,7 +18510,9 @@ This takes the following parameters - ignoreListingChecksum - Do not use checksums for listings - resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync. Use at your own risk! -- workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync) +- workdir - server directory for history files (default: `~/.cache/rclone/bisync`) +- backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote. +- backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote. - noCleanup - retain working files See [bisync command help](https://rclone.org/commands/rclone_bisync/) @@ -17552,7 +18912,6 @@ Here is an overview of the major features of each cloud storage system. | ---------------------------- |:-----------------:|:-------:|:----------------:|:---------------:|:---------:|:--------:| | 1Fichier | Whirlpool | - | No | Yes | R | - | | Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - | -| Amazon Drive | MD5 | - | Yes | No | R | - | | Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU | | Backblaze B2 | SHA1 | R/W | No | No | R/W | - | | Box | SHA1 | R/W | Yes | No | - | - | @@ -17561,7 +18920,7 @@ Here is an overview of the major features of each cloud storage system.
| Enterprise File Fabric | - | R/W | Yes | No | R/W | - | | FTP | - | R/W ¹⁰ | No | No | - | - | | Google Cloud Storage | MD5 | R/W | No | No | R/W | - | -| Google Drive | MD5, SHA1, SHA256 | R/W | No | Yes | R/W | - | +| Google Drive | MD5, SHA1, SHA256 | DR/W | No | Yes | R/W | DRWU | | Google Photos | - | - | No | Yes | R | - | | HDFS | - | R/W | No | No | - | - | | HiDrive | HiDrive ¹² | R/W | No | No | - | - | @@ -17575,7 +18934,7 @@ Here is an overview of the major features of each cloud storage system. | Memory | MD5 | R/W | No | No | - | - | | Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - | | Microsoft Azure Files Storage | MD5 | R/W | Yes | No | R/W | - | -| Microsoft OneDrive | QuickXorHash ⁵ | R/W | Yes | No | R | - | +| Microsoft OneDrive | QuickXorHash ⁵ | DR/W | Yes | No | R | DRW | | OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - | | OpenStack Swift | MD5 | R/W | No | No | R/W | - | | Oracle Object Storage | MD5 | R/W | No | No | R/W | - | @@ -17587,7 +18946,7 @@ | QingStor | MD5 | - ⁹ | No | No | R/W | - | | Quatrix by Maytech | - | R/W | No | No | - | - | | Seafile | - | - | No | No | - | - | -| SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - | +| SFTP | MD5, SHA1 ² | DR/W | Depends | No | - | - | | Sia | - | - | No | No | - | - | | SMB | - | R/W | Yes | No | - | - | | SugarSync | - | - | No | No | - | - | @@ -17596,7 +18955,7 @@ | WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - | | Yandex Disk | MD5 | R/W | No | No | R | - | | Zoho WorkDrive | - | - | No | No | - | - | -| The local filesystem | All | R/W | Depends | No | - | RWU | +| The local filesystem | All | DR/W | Depends | No | - | DRWU | ¹ Dropbox supports [its own custom hash](https://www.dropbox.com/developers/reference/content-hash). @@ -17650,13 +19009,21 @@ systems they must support a common hash type. Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp -that represent the time of the upload. To be relevant for syncing +that represents the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the `--checksum` flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it. + +| Key | Explanation | +|-----|-------------| +| `-` | ModTimes not supported - times likely the upload time | +| `R` | ModTimes supported on files but can't be changed without re-upload | +| `R/W` | Read and Write ModTimes fully supported on files | +| `DR` | ModTimes supported on files and directories but can't be changed without re-upload | +| `DR/W` | Read and Write ModTimes fully supported on files and directories | + For storage systems with a `-` in the ModTime column, the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, @@ -17678,6 +19045,9 @@ in a `mount` will be silently ignored. Storage systems with `R/W` (for read/write) in the ModTime column also support modtime-only operations.
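+
+For example (the remote name is illustrative), against a backend whose ModTime
+column is `-`, change detection can be switched from modification times to
+hashes:
+
+```
+rclone sync /path/to/source remote:backup --checksum
+```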
+For storage systems with `D` in the ModTime column, the above +symbols apply to directories as well as files. + ### Case Insensitive ### If a cloud storage system is case sensitive then it is possible to @@ -17990,9 +19360,12 @@ The levels of metadata support are | Key | Explanation | |-----|-------------| -| `R` | Read only System Metadata | -| `RW` | Read and write System Metadata | -| `RWU` | Read and write System Metadata and read and write User Metadata | +| `R` | Read only System Metadata on files only | +| `RW` | Read and write System Metadata on files only | +| `RWU` | Read and write System Metadata and read and write User Metadata on files only | +| `DR` | Read only System Metadata on files and directories | +| `DRW` | Read and write System Metadata on files and directories | +| `DRWU` | Read and write System Metadata and read and write User Metadata on files and directories | See [the metadata docs](https://rclone.org/docs/#metadata) for more info. @@ -18005,7 +19378,6 @@ upon backend-specific capabilities. | ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------------|:------------:|:-----:|:--------:| | 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes | | Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes | -| Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | No | Yes | | Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | | Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No | | Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes | @@ -18019,6 +19391,7 @@ upon backend-specific capabilities. | HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes | | HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes | | HTTP | No | No | No | No | No | No | No | No | No | No | Yes | +| ImageKit | Yes | Yes | Yes | No | No | No | No | No | No | No | Yes | | Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No | | Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | | Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes | @@ -18048,7 +19421,7 @@ upon backend-specific capabilities. | WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ³ | No | No | Yes | Yes | | Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes | | Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes | -| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes | +| The local filesystem | No | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes | ¹ Note Swift implements this in order to delete directory markers but it doesn't actually have a quicker way of deleting files other than @@ -18167,7 +19540,7 @@ Flags for anything which can Copy a file.
--ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -18181,6 +19554,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -18201,6 +19575,7 @@ Flags just used for `rclone sync`. --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -18256,7 +19631,7 @@ General networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0") ``` @@ -18440,14 +19815,7 @@ Flags to control the Remote Control API. Backend only flags. These can be set in the config file also. ``` - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name @@ -18458,6 +19826,8 @@ Backend only flags. These can be set in the config file also. 
--azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal's client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don't store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -18488,6 +19858,7 @@ Backend only flags. These can be set in the config file also. --azurefiles-client-secret string One of the service principal's client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -18507,8 +19878,9 @@ Backend only flags. These can be set in the config file also. --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -18527,6 +19899,7 @@ Backend only flags. These can be set in the config file also. --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -18543,6 +19916,7 @@ Backend only flags. These can be set in the config file also. --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) 
(default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) @@ -18556,15 +19930,19 @@ Backend only flags. These can be set in the config file also. --cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress -L, --copy-links Follow symlinks and copy the pointed to item + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32") --crypt-filename-encryption string How to encrypt the filenames (default "standard") @@ -18575,6 +19953,7 @@ Backend only flags. These can be set in the config file also. --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted --crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin") --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs @@ -18584,6 +19963,7 @@ Backend only flags. These can be set in the config file also. --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -18632,6 +20012,7 @@ Backend only flags. These can be set in the config file also. 
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -18641,10 +20022,12 @@ Backend only flags. These can be set in the config file also. --dropbox-token-url string Token server url --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -18655,6 +20038,7 @@ Backend only flags. These can be set in the config file also. --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -18680,6 +20064,7 @@ Backend only flags. These can be set in the config file also. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -18700,6 +20085,7 @@ Backend only flags. These can be set in the config file also. --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -18708,10 +20094,12 @@ Backend only flags. These can be set in the config file also. 
--gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -18720,6 +20108,7 @@ Backend only flags. These can be set in the config file also. --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") @@ -18730,10 +20119,12 @@ Backend only flags. These can be set in the config file also. --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of HTTP host to connect to + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -18742,6 +20133,7 @@ Backend only flags. These can be set in the config file also. --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2" --imagekit-versions Include old versions in directory listings --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") @@ -18751,6 +20143,7 @@ Backend only flags. These can be set in the config file also. 
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
+ --jottacloud-description string Description of the remote
--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
@@ -18759,6 +20152,7 @@ Backend only flags. These can be set in the config file also.
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
+ --koofr-description string Description of the remote
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
@@ -18766,10 +20160,12 @@ Backend only flags. These can be set in the config file also.
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
+ --local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -18782,6 +20178,7 @@ Backend only flags. These can be set in the config file also.
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
+ --mailru-description string Description of the remote
--mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) @@ -18809,6 +20209,7 @@ Backend only flags. These can be set in the config file also. --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -18818,6 +20219,7 @@ Backend only flags. These can be set in the config file also. --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous") --onedrive-link-type string Set the type of the links created by the link command (default "view") --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default "global") --onedrive-root-folder-id string ID of the root folder @@ -18831,6 +20233,7 @@ Backend only flags. These can be set in the config file also. --oos-config-profile string Profile name inside the oci config file (default "Default") --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -18849,12 +20252,14 @@ Backend only flags. These can be set in the config file also. 
--oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-password string Your pcloud password (obscured) @@ -18865,6 +20270,7 @@ Backend only flags. These can be set in the config file also. --pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -18877,11 +20283,13 @@ Backend only flags. These can be set in the config file also. --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -18892,12 +20300,14 @@ Backend only flags. These can be set in the config file also. 
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
+ --putio-description string Description of the remote
--putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
+ --qingstor-description string Description of the remote
--qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
@@ -18906,18 +20316,21 @@ Backend only flags. These can be set in the config file also.
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
+ --quatrix-description string Description of the remote
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
--quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
+ --quatrix-skip-project-folders Skip project folders in operations
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
+ --s3-description string Description of the remote
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist + --seafile-description string Description of the remote --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -18976,6 +20392,7 @@ Backend only flags. These can be set in the config file also. --sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -19010,6 +20427,7 @@ Backend only flags. These can be set in the config file also. --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -19018,10 +20436,12 @@ Backend only flags. These can be set in the config file also. 
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true) @@ -19033,6 +20453,7 @@ Backend only flags. These can be set in the config file also. --smb-user string SMB username (default "$USER") --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default "existing") --storj-satellite-address string Satellite address (default "us1.storj.io") @@ -19041,6 +20462,7 @@ Backend only flags. These can be set in the config file also. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -19054,6 +20476,7 @@ Backend only flags. These can be set in the config file also. --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") @@ -19073,17 +20496,21 @@ Backend only flags. These can be set in the config file also. 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-description string Description of the remote --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token + --uptobox-description string Description of the remote --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-private Set to make uploaded files private --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-description string Description of the remote --webdav-encoding string The encoding for the backend --webdav-headers CommaSepList Set HTTP headers for all transactions --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi) + --webdav-owncloud-exclude-shares Exclude ownCloud shares --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to @@ -19092,6 +20519,7 @@ Backend only flags. These can be set in the config file also. --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret + --yandex-description string Description of the remote --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-token string OAuth Access Token as a JSON blob @@ -19099,6 +20527,7 @@ Backend only flags. These can be set in the config file also. --zoho-auth-url string Auth server URL --zoho-client-id string OAuth Client Id --zoho-client-secret string OAuth Client Secret + --zoho-description string Description of the remote --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-region string Zoho region to connect to --zoho-token string OAuth Access Token as a JSON blob @@ -19661,23 +21090,34 @@ docker volume inspect my_vol If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first. +## Bisync +`bisync` is **in beta** and is considered an **advanced command**, so use with care. +Make sure you have read and understood the entire [manual](https://rclone.org/bisync) (especially the [Limitations](#limitations) section) before using, or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/). + ## Getting started {#getting-started} - [Install rclone](https://rclone.org/install/) and setup your remotes. - Bisync will create its working directory - at `~/.cache/rclone/bisync` on Linux + at `~/.cache/rclone/bisync` on Linux, `/Users/yourusername/Library/Caches/rclone/bisync` on Mac, or `C:\Users\MyLogin\AppData\Local\rclone\bisync` on Windows. Make sure that this location is writable. - Run bisync with the `--resync` flag, specifying the paths to the local and remote sync directory roots. 
-- For successive sync runs, leave off the `--resync` flag. +- For successive sync runs, leave off the `--resync` flag. (**Important!**) - Consider using a [filters file](#filtering) for excluding unnecessary files and directories from the sync. - Consider setting up the [--check-access](#check-access) feature for safety. -- On Linux, consider setting up a [crontab entry](#cron). bisync can +- On Linux or Mac, consider setting up a [crontab entry](#cron). bisync can safely run in concurrent cron jobs thanks to lock files it maintains. +For example, your first command might look like this: + +``` +rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run +``` +If all looks good, run it again without `--dry-run`. After that, remove `--resync` as well. + Here is a typical run log (with timestamps removed for clarity): ``` @@ -19736,36 +21176,36 @@ Positional arguments: Type 'rclone listremotes' for list of configured remotes. Optional Flags: - --check-access Ensure expected `RCLONE_TEST` files are found on - both Path1 and Path2 filesystems, else abort. - --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`) - --check-sync CHOICE Controls comparison of final listings: - `true | false | only` (default: true) - If set to `only`, bisync will only compare listings - from the last run but skip actual sync. - --filters-file PATH Read filtering patterns from a file - --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. - If exceeded, the bisync run will abort. (default: 50%) - --force Bypass `--max-delete` safety check and run the sync. - Consider using with `--verbose` - --create-empty-src-dirs Sync creation and deletion of empty directories. - (Not compatible with --remove-empty-dirs) - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. - Warning: Path1 files may overwrite Path2 versions. - Consider using `--verbose` or `--dry-run` first. - --ignore-listing-checksum Do not use checksums for listings - (add --ignore-checksum to additionally skip post-copy checksum checks) - --resilient Allow future runs to retry after certain less-serious errors, - instead of requiring --resync. Use at your own risk! - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --workdir PATH Use custom working directory (useful for testing). - (default: `~/.cache/rclone/bisync`) - -n, --dry-run Go through the motions - No files are copied/deleted. - -v, --verbose Increases logging verbosity. - May be specified more than once for more details. - -h, --help help for bisync + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --compare string Comma-separated list of bisync-specific compare options ex. 
'size,modtime,checksum' (default: 'size,modtime') + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none") + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict') + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none") + --retries int Retry operations this many times if they fail (requires --resilient). (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) + --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%) + -n, --dry-run Go through the motions - No files are copied/deleted. + -v, --verbose Increases logging verbosity. May be specified more than once for more details. ``` Arbitrary rclone flags may be specified on the @@ -19799,22 +21239,16 @@ as the last step in the process. ## Command-line flags -#### --resync +### --resync This will effectively make both Path1 and Path2 filesystems contain a -matching superset of all files. Path2 files that do not exist in Path1 will +matching superset of all files. By default, Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2. 
-The `--resync` sequence is roughly equivalent to: +The `--resync` sequence is roughly equivalent to the following (but see [`--resync-mode`](#resync-mode) for other options): ``` -rclone copy Path2 Path1 --ignore-existing -rclone copy Path1 Path2 -``` -Or, if using `--create-empty-src-dirs`: -``` -rclone copy Path2 Path1 --ignore-existing -rclone copy Path1 Path2 --create-empty-src-dirs -rclone copy Path2 Path1 --create-empty-src-dirs +rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs] +rclone copy Path1 Path2 [--create-empty-src-dirs] ``` The base directories on both Path1 and Path2 filesystems must exist @@ -19822,13 +21256,10 @@ or bisync will fail. This is required for safety - that bisync can verify that both paths are valid. When using `--resync`, a newer version of a file on the Path2 filesystem -will be overwritten by the Path1 filesystem version. -(Note that this is [NOT entirely symmetrical](https://github.com/rclone/rclone/issues/5681#issuecomment-938761815).) +will (by default) be overwritten by the Path1 filesystem version. +(Note that this is [NOT entirely symmetrical](https://github.com/rclone/rclone/issues/5681#issuecomment-938761815), and more symmetrical options can be specified with the [`--resync-mode`](#resync-mode) flag.) Carefully evaluate deltas using [--dry-run](https://rclone.org/flags/#non-backend-flags). -[//]: # (I reverted a recent change in the above paragraph, as it was incorrect. -https://github.com/rclone/rclone/commit/dd72aff98a46c6e20848ac7ae5f7b19d45802493 ) - For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail. @@ -19838,7 +21269,100 @@ For a non-resync run, either path being empty (no files in the tree) fails with This is a safety check that an unexpected empty path does not result in deleting **everything** in the other path. -#### --check-access +Note that `--resync` implies `--resync-mode path1` unless a different +[`--resync-mode`](#resync-mode) is explicitly specified. +It is not necessary to use both the `--resync` and `--resync-mode` flags -- +either one is sufficient without the other. + +**Note:** `--resync` (including `--resync-mode`) should only be used under three specific (rare) circumstances: +1. It is your _first_ bisync run (between these two paths) +2. You've just made changes to your bisync settings (such as editing the contents of your `--filters-file`) +3. There was an error on the prior run, and as a result, bisync now requires `--resync` to recover + +The rest of the time, you should _omit_ `--resync`. The reason is because `--resync` will only _copy_ (not _sync_) each side to the other. +Therefore, if you included `--resync` for every bisync run, it would never be possible to delete a file -- +the deleted file would always keep reappearing at the end of every run (because it's being copied from the other side where it still exists). +Similarly, renaming a file would always result in a duplicate copy (both old and new name) on both sides. 
+ +If you find that frequent interruptions from #3 are an issue, rather than +automatically running `--resync`, the recommended alternative is to use the +[`--resilient`](#resilient), [`--recover`](#recover), and +[`--conflict-resolve`](#conflict-resolve) flags, (along with [Graceful +Shutdown](#graceful-shutdown) mode, when needed) for a very robust +"set-it-and-forget-it" bisync setup that can automatically bounce back from +almost any interruption it might encounter. Consider adding something like the +following: + +``` +--resilient --recover --max-lock 2m --conflict-resolve newer +``` + +### --resync-mode CHOICE {#resync-mode} + +In the event that a file differs on both sides during a `--resync`, +`--resync-mode` controls which version will overwrite the other. The supported +options are similar to [`--conflict-resolve`](#conflict-resolve). For all of +the following options, the version that is kept is referred to as the "winner", +and the version that is overwritten (deleted) is referred to as the "loser". +The options are named after the "winner": + +- `path1` - (the default) - the version from Path1 is unconditionally +considered the winner (regardless of `modtime` and `size`, if any). This can be +useful if one side is more trusted or up-to-date than the other, at the time of +the `--resync`. +- `path2` - same as `path1`, except the path2 version is considered the winner. +- `newer` - the newer file (by `modtime`) is considered the winner, regardless +of which side it came from. This may result in having a mix of some winners +from Path1, and some winners from Path2. (The implementation is analogous to +running `rclone copy --update` in both directions.) +- `older` - same as `newer`, except the older file is considered the winner, +and the newer file is considered the loser. +- `larger` - the larger file (by `size`) is considered the winner (regardless +of `modtime`, if any). This can be a useful option for remotes without +`modtime` support, or with the kinds of files (such as logs) that tend to grow +but not shrink, over time. +- `smaller` - the smaller file (by `size`) is considered the winner (regardless +of `modtime`, if any). + +For all of the above options, note the following: +- If either of the underlying remotes lacks support for the chosen method, it +will be ignored and will fall back to the default of `path1`. (For example, if +`--resync-mode newer` is set, but one of the paths uses a remote that doesn't +support `modtime`.) +- If a winner can't be determined because the chosen method's attribute is +missing or equal, it will be ignored, and bisync will instead try to determine +whether the files differ by looking at the other `--compare` methods in effect. +(For example, if `--resync-mode newer` is set, but the Path1 and Path2 modtimes +are identical, bisync will compare the sizes.) If bisync concludes that they +differ, preference is given to whichever is the "source" at that moment. (In +practice, this gives a slight advantage to Path2, as the 2to1 copy comes before +the 1to2 copy.) If the files _do not_ differ, nothing is copied (as both sides +are already correct). +- These options apply only to files that exist on both sides (with the same +name and relative path). Files that exist *only* on one side and not the other +are *always* copied to the other, during `--resync` (this is one of the main +differences between resync and non-resync runs.). 
+- `--conflict-resolve`, `--conflict-loser`, and `--conflict-suffix` do not
+apply during `--resync`, and unlike these flags, nothing is renamed during
+`--resync`. When a file differs on both sides during `--resync`, one version
+always overwrites the other (much like in `rclone copy`.) (Consider using
+[`--backup-dir`](#backup-dir1-and-backup-dir2) to retain a backup of the losing
+version.)
+- Unlike for `--conflict-resolve`, `--resync-mode none` is not a valid option
+(or rather, it will be interpreted as "no resync", unless `--resync` has also
+been specified, in which case it will be ignored.)
+- Winners and losers are decided at the individual file-level only (there is
+not currently an option to pick an entire winning directory atomically,
+although the `path1` and `path2` options typically produce a similar result.)
+- To maintain backward-compatibility, the `--resync` flag implies
+`--resync-mode path1` unless a different `--resync-mode` is explicitly
+specified. Similarly, all `--resync-mode` options (except `none`) imply
+`--resync`, so it is not necessary to use both the `--resync` and
+`--resync-mode` flags simultaneously -- either one is sufficient without the
+other.
+
+
+### --check-access

Access check files are an additional safety measure against data loss. bisync will ensure it can find matching `RCLONE_TEST` files in the same places
@@ -19867,7 +21391,7 @@ bisync
assuming a bunch of deleted files if the linked-to tree should not be accessible. See also the [--check-filename](#check-filename) flag.

-#### --check-filename
+### --check-filename

Name of the file(s) used in access health validation. The default `--check-filename` is `RCLONE_TEST`.
@@ -19875,7 +21399,154 @@ One or more files having this filename must exist, synchronized between your source and destination filesets, in order for `--check-access` to succeed. See [--check-access](#check-access) for additional details.

-#### --max-delete
+### --compare
+
+As of `v1.66`, bisync fully supports comparing based on any combination of
+size, modtime, and checksum (lifting the prior restriction on backends without
+modtime support.)
+
+By default (without the `--compare` flag), bisync inherits the same comparison
+options as `sync`
+(that is: `size` and `modtime` by default, unless modified with flags such as
+[`--checksum`](https://rclone.org/docs/#c-checksum) or [`--size-only`](https://rclone.org/docs/#size-only).)
+
+If the `--compare` flag is set, it will override these defaults. This can be
+useful if you wish to compare based on combinations not currently supported in
+`sync`, such as comparing all three of `size` AND `modtime` AND `checksum`
+simultaneously (or just `modtime` AND `checksum`).
+
+`--compare` takes a comma-separated list, with the currently supported values
+being `size`, `modtime`, and `checksum`. For example, if you want to compare
+size and checksum, but not modtime, you would do:
+```
+--compare size,checksum
+```
+
+Or if you want to compare all three:
+```
+--compare size,modtime,checksum
+```
+
+`--compare` overrides any conflicting flags. For example, if you set the
+conflicting flags `--compare checksum --size-only`, `--size-only` will be
+ignored, and bisync will compare checksum and not size. To avoid confusion, it
+is recommended to use _either_ `--compare` or the normal `sync` flags, but not
+both.
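+
+For instance, a complete command comparing all three attributes (using the same placeholder remotes as the earlier examples) might look like:
+
+```
+rclone bisync remote1:path1 remote2:path2 --compare size,modtime,checksum
+```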
+ +If `--compare` includes `checksum` and both remotes support checksums but have +no hash types in common with each other, checksums will be considered _only_ +for comparisons within the same side (to determine what has changed since the +prior sync), but not for comparisons against the opposite side. If one side +supports checksums and the other does not, checksums will only be considered on +the side that supports them. + +When comparing with `checksum` and/or `size` without `modtime`, bisync cannot +determine whether a file is `newer` or `older` -- only whether it is `changed` +or `unchanged`. (If it is `changed` on both sides, bisync still does the +standard equality-check to avoid declaring a sync conflict unless it absolutely +has to.) + +It is recommended to do a `--resync` when changing `--compare` settings, as +otherwise your prior listing files may not contain the attributes you wish to +compare (for example, they will not have stored checksums if you were not +previously comparing checksums.) + +### --ignore-listing-checksum + +When `--checksum` or `--compare checksum` is set, bisync will retrieve (or +generate) checksums (for backends that support them) when creating the listings +for both paths, and store the checksums in the listing files. +`--ignore-listing-checksum` will disable this behavior, which may speed things +up considerably, especially on backends (such as [local](https://rclone.org/local/)) where hashes +must be computed on the fly instead of retrieved. Please note the following: + +* As of `v1.66`, `--ignore-listing-checksum` is now automatically set when +neither `--checksum` nor `--compare checksum` are in use (as the checksums +would not be used for anything.) +* `--ignore-listing-checksum` is NOT the same as +[`--ignore-checksum`](https://rclone.org/docs/#ignore-checksum), +and you may wish to use one or the other, or both. In a nutshell: +`--ignore-listing-checksum` controls whether checksums are considered when +scanning for diffs, +while `--ignore-checksum` controls whether checksums are considered during the +copy/sync operations that follow, +if there ARE diffs. +* Unless `--ignore-listing-checksum` is passed, bisync currently computes +hashes for one path +*even when there's no common hash with the other path* +(for example, a [crypt](https://rclone.org/crypt/#modification-times-and-hashes) remote.) +This can still be beneficial, as the hashes will still be used to detect +changes within the same side +(if `--checksum` or `--compare checksum` is set), even if they can't be used to +compare against the opposite side. +* If you wish to ignore listing checksums _only_ on remotes where they are slow +to compute, consider using +[`--no-slow-hash`](#no-slow-hash) (or +[`--slow-hash-sync-only`](#slow-hash-sync-only)) instead of +`--ignore-listing-checksum`. +* If `--ignore-listing-checksum` is used simultaneously with `--compare +checksum` (or `--checksum`), checksums will be ignored for bisync deltas, +but still considered during the sync operations that follow (if deltas are +detected based on modtime and/or size.) + +### --no-slow-hash + +On some remotes (notably `local`), checksums can dramatically slow down a +bisync run, because hashes cannot be stored and need to be computed in +real-time when they are requested. On other remotes (such as `drive`), they add +practically no time at all. 
The `--no-slow-hash` flag will automatically skip
+checksums on remotes where they are slow, while still comparing them on others
+(assuming [`--compare`](#compare) includes `checksum`.) This can be useful when one of your
+bisync paths is slow but you still want to check checksums on the other, for a more
+robust sync.
+
+### --slow-hash-sync-only
+
+Same as [`--no-slow-hash`](#no-slow-hash), except slow hashes are still
+considered during sync calls. They are still NOT considered for determining
+deltas, nor are they included in listings. They are also skipped during
+`--resync`. The main use case for this flag is when you have a large number of
+files, but relatively few of them change from run to run -- so you don't want
+to check your entire tree every time (it would take too long), but you still
+want to consider checksums for the smaller group of files for which a `modtime`
+or `size` change was detected. Keep in mind that this speed savings comes with
+a safety trade-off: if a file's content were to change without a change to its
+`modtime` or `size`, bisync would not detect it, and it would not be synced.
+
+`--slow-hash-sync-only` is only useful if both remotes share a common hash
+type (if they don't, bisync will automatically fall back to `--no-slow-hash`.)
+Both `--no-slow-hash` and `--slow-hash-sync-only` have no effect without
+`--compare checksum` (or `--checksum`).
+
+### --download-hash
+
+If `--download-hash` is set, bisync will use best efforts to obtain an MD5
+checksum by downloading and computing on-the-fly, when checksums are not
+otherwise available (for example, a remote that doesn't support them.) Note
+that since rclone has to download the entire file, this may dramatically slow
+down your bisync runs, and is also likely to use a lot of data, so it is
+probably not practical for bisync paths with a large total file size. However,
+it can be a good option for syncing small-but-important files with maximum
+accuracy (for example, a source code repo on a `crypt` remote.) An additional
+advantage over methods like [`cryptcheck`](https://rclone.org/commands/rclone_cryptcheck/) is
+that the original file is not required for comparison (for example,
+`--download-hash` can be used to bisync two different crypt remotes with
+different passwords.)
+
+When `--download-hash` is set, bisync still looks for more efficient checksums
+first, and falls back to downloading only when none are found. It takes
+priority over conflicting flags such as `--no-slow-hash`. `--download-hash` is
+not suitable for [Google Docs](#gdocs) and other files of unknown size, as
+their checksums would change from run to run (due to small variances in the
+internals of the generated export file.) Therefore, bisync automatically skips
+`--download-hash` for files with a size less than 0.
+
+See also: [`Hasher`](https://rclone.org/hasher/) backend,
+[`cryptcheck`](https://rclone.org/commands/rclone_cryptcheck/) command, [`rclone check
+--download`](https://rclone.org/commands/rclone_check/) option,
+[`md5sum`](https://rclone.org/commands/rclone_md5sum/) command
+
+### --max-delete

As a safety check, if greater than the `--max-delete` percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with
@@ -19893,7 +21564,7 @@ to bypass the check.

Also see the [all files changed](#all-files-changed) check.

-#### --filters-file {#filters-file}
+### --filters-file {#filters-file}

By using rclone filter features you can exclude file types or directory sub-trees from the sync.
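+
+As an illustrative sketch (these patterns and the file path are hypothetical, not taken from this manual), a simple filters file might contain:
+
+```
+- .DS_Store
+- *.tmp
+- /backups/**
+```
+
+and would then be passed on each run with `--filters-file /path/to/bisync_filters.txt`.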
@@ -19917,7 +21588,153 @@ of the current filters file and compares it to the hash stored in the `.md5` fil If they don't match, the run aborts with a critical error and thus forces you to do a `--resync`, likely avoiding a disaster. -#### --check-sync +### --conflict-resolve CHOICE {#conflict-resolve} + +In bisync, a "conflict" is a file that is *new* or *changed* on *both sides* +(relative to the prior run) AND is *not currently identical* on both sides. +`--conflict-resolve` controls how bisync handles such a scenario. The currently +supported options are: + +- `none` - (the default) - do not attempt to pick a winner, keep and rename +both files according to [`--conflict-loser`](#conflict-loser) and +[`--conflict-suffix`](#conflict-suffix) settings. For example, with the default +settings, `file.txt` on Path1 is renamed `file.txt.conflict1` and `file.txt` on +Path2 is renamed `file.txt.conflict2`. Both are copied to the opposite path +during the run, so both sides end up with a copy of both files. (As `none` is +the default, it is not necessary to specify `--conflict-resolve none` -- you +can just omit the flag.) +- `newer` - the newer file (by `modtime`) is considered the winner and is +copied without renaming. The older file (the "loser") is handled according to +`--conflict-loser` and `--conflict-suffix` settings (either renamed or +deleted.) For example, if `file.txt` on Path1 is newer than `file.txt` on +Path2, the result on both sides (with other default settings) will be `file.txt` +(winner from Path1) and `file.txt.conflict1` (loser from Path2). +- `older` - same as `newer`, except the older file is considered the winner, +and the newer file is considered the loser. +- `larger` - the larger file (by `size`) is considered the winner (regardless +of `modtime`, if any). +- `smaller` - the smaller file (by `size`) is considered the winner (regardless +of `modtime`, if any). +- `path1` - the version from Path1 is unconditionally considered the winner +(regardless of `modtime` and `size`, if any). This can be useful if one side is +usually more trusted or up-to-date than the other. +- `path2` - same as `path1`, except the path2 version is considered the +winner. + +For all of the above options, note the following: +- If either of the underlying remotes lacks support for the chosen method, it +will be ignored and fall back to `none`. (For example, if `--conflict-resolve +newer` is set, but one of the paths uses a remote that doesn't support +`modtime`.) +- If a winner can't be determined because the chosen method's attribute is +missing or equal, it will be ignored and fall back to `none`. (For example, if +`--conflict-resolve newer` is set, but the Path1 and Path2 modtimes are +identical, even if the sizes may differ.) +- If the file's content is currently identical on both sides, it is not +considered a "conflict", even if new or changed on both sides since the prior +sync. (For example, if you made a change on one side and then synced it to the +other side by other means.) Therefore, none of the conflict resolution flags +apply in this scenario. +- The conflict resolution flags do not apply during a `--resync`, as there is +no "prior run" to speak of (but see [`--resync-mode`](#resync-mode) for similar +options.) + +### --conflict-loser CHOICE {#conflict-loser} + +`--conflict-loser` determines what happens to the "loser" of a sync conflict +(when [`--conflict-resolve`](#conflict-resolve) determines a winner) or to both +files (when there is no winner.) 
The currently supported options are: + +- `num` - (the default) - auto-number the conflicts by automatically appending +the next available number to the `--conflict-suffix`, in chronological order. +For example, with the default settings, the first conflict for `file.txt` will +be renamed `file.txt.conflict1`. If `file.txt.conflict1` already exists, +`file.txt.conflict2` will be used instead (etc., up to a maximum of +9223372036854775807 conflicts.) +- `pathname` - rename the conflicts according to which side they came from, +which was the default behavior prior to `v1.66`. For example, with +`--conflict-suffix path`, `file.txt` from Path1 will be renamed +`file.txt.path1`, and `file.txt` from Path2 will be renamed `file.txt.path2`. +If two non-identical suffixes are provided (ex. `--conflict-suffix +cloud,local`), the trailing digit is omitted. Importantly, note that with +`pathname`, there is no auto-numbering beyond `2`, so if `file.txt.path2` +somehow already exists, it will be overwritten. Using a dynamic date variable +in your `--conflict-suffix` (see below) is one possible way to avoid this. Note +also that conflicts-of-conflicts are possible, if the original conflict is not +manually resolved -- for example, if for some reason you edited +`file.txt.path1` on both sides, and those edits were different, the result +would be `file.txt.path1.path1` and `file.txt.path1.path2` (in addition to +`file.txt.path2`.) +- `delete` - keep the winner only and delete the loser, instead of renaming it. +If a winner cannot be determined (see `--conflict-resolve` for details on how +this could happen), `delete` is ignored and the default `num` is used instead +(i.e. both versions are kept and renamed, and neither is deleted.) `delete` is +inherently the most destructive option, so use it only with care. + +For all of the above options, note that if a winner cannot be determined (see +`--conflict-resolve` for details on how this could happen), or if +`--conflict-resolve` is not in use, *both* files will be renamed. + +### --conflict-suffix STRING[,STRING] {#conflict-suffix} + +`--conflict-suffix` controls the suffix that is appended when bisync renames a +[`--conflict-loser`](#conflict-loser) (default: `conflict`). +`--conflict-suffix` will accept either one string or two comma-separated +strings to assign different suffixes to Path1 vs. Path2. This may be helpful +later in identifying the source of the conflict. (For example, +`--conflict-suffix dropboxconflict,laptopconflict`) + +With `--conflict-loser num`, a number is always appended to the suffix. With +`--conflict-loser pathname`, a number is appended only when one suffix is +specified (or when two identical suffixes are specified.) i.e. with +`--conflict-loser pathname`, all of the following would produce exactly the +same result: + +``` +--conflict-suffix path +--conflict-suffix path,path +--conflict-suffix path1,path2 +``` + +Suffixes may be as short as 1 character. By default, the suffix is appended +after any other extensions (ex. `file.jpg.conflict1`), however, this can be +changed with the [`--suffix-keep-extension`](https://rclone.org/docs/#suffix-keep-extension) flag +(i.e. to instead result in `file.conflict1.jpg`). + +`--conflict-suffix` supports several *dynamic date variables* when enclosed in +curly braces as globs. This can be helpful to track the date and/or time that +each conflict was handled by bisync. 
For example:
+
+```
+--conflict-suffix {DateOnly}-conflict
+// result: myfile.txt.2006-01-02-conflict1
+```
+
+All of the formats described [here](https://pkg.go.dev/time#pkg-constants) and
+[here](https://pkg.go.dev/time#example-Time.Format) are supported, but take
+care to ensure that your chosen format does not use any characters that are
+illegal on your remotes (for example, macOS does not allow colons in
+filenames, and slashes are also best avoided as they are often interpreted as
+directory separators.) To address this particular issue, an additional
+`{MacFriendlyTime}` (or just `{mac}`) option is supported, which results in
+`2006-01-02 0304PM`.
+
+Note that `--conflict-suffix` is entirely separate from rclone's main
+[`--suffix`](https://rclone.org/docs/#suffix-suffix) flag. This is intentional, as users may wish
+to use both flags simultaneously, if also using
+[`--backup-dir`](#backup-dir1-and-backup-dir2).
+
+Finally, note that the default in bisync prior to `v1.66` was to rename
+conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
+`conflict`.) Bisync now defaults to a single dot instead of a double dot, but
+additional dots can be added by including them in the specified suffix string.
+For example, for behavior equivalent to the previous default, use:
+
+```
+[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
+```
+
+### --check-sync

Enabled by default, the check-sync function checks that all of the same files
exist in both the Path1 and Path2 history listings. This _check-sync_
@@ -19934,59 +21751,183 @@ sync run times for very large numbers of files.

The check may be run manually with `--check-sync=only`. It runs only the
integrity check and terminates without actually synching.

-See also: [Concurrent modifications](#concurrent-modifications)
+Note that currently, `--check-sync` **only checks listing snapshots and NOT the
+actual files on the remotes.** Note also that the listing snapshots will not
+know about any changes that happened during or after the latest bisync run, as
+those will be discovered on the next run. Therefore, while listings should
+always match _each other_ at the end of a bisync run, it is _expected_ that
+they will not match the underlying remotes, nor will the remotes match each
+other, if there were changes during or after the run. This is normal, and any
+differences will be detected and synced on the next run.
+For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using [`check`](commands/rclone_check/)
+(or [`cryptcheck`](https://rclone.org/commands/rclone_cryptcheck/), if at least one path is a `crypt` remote) instead of `--check-sync`,
+keeping in mind that differences are expected if files changed during or after your last bisync run.

-#### --ignore-listing-checksum
+For example, a possible sequence could look like this:

-By default, bisync will retrieve (or generate) checksums (for backends that support them)
-when creating the listings for both paths, and store the checksums in the listing files.
-`--ignore-listing-checksum` will disable this behavior, which may speed things up considerably,
-especially on backends (such as [local](https://rclone.org/local/)) where hashes must be computed on the fly instead of retrieved.
-Please note the following:
+1. Normally scheduled bisync run:

-* While checksums are (by default) generated and stored in the listing files,
-they are NOT currently used for determining diffs (deltas).
-It is anticipated that full checksum support will be added in a future version.
-* `--ignore-listing-checksum` is NOT the same as [`--ignore-checksum`](https://rclone.org/docs/#ignore-checksum),
-and you may wish to use one or the other, or both. In a nutshell:
-`--ignore-listing-checksum` controls whether checksums are considered when scanning for diffs,
-while `--ignore-checksum` controls whether checksums are considered during the copy/sync operations that follow,
-if there ARE diffs.
-* Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path
-*even when there's no common hash with the other path*
-(for example, a [crypt](https://rclone.org/crypt/#modification-times-and-hashes) remote.)
-* If both paths support checksums and have a common hash,
-AND `--ignore-listing-checksum` was not specified when creating the listings,
-`--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.)
-However, `--check-sync=only` will NOT include checksums if the previous listings
-were generated on a run using `--ignore-listing-checksum`. For a more robust integrity check of the current state,
-consider using [`check`](commands/rclone_check/)
-(or [`cryptcheck`](https://rclone.org/commands/rclone_cryptcheck/), if at least one path is a `crypt` remote.)
+```
+rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
+```

-#### --resilient
+2. Periodic independent integrity check (perhaps scheduled nightly or weekly):
+
+```
+rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
+```
+
+3. If diffs are found, you have some choices to correct them.
+If one side is more up-to-date and you want to make the other side match it, you could run:
+
+```
+rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
+```
+(or switch Path1 and Path2 to make Path2 the source-of-truth)
+
+Or, if neither side is totally up-to-date, you could run a `--resync` to bring them back into agreement
+(but remember that this could cause deleted files to re-appear.)
+
+Note also that `rclone check` does not currently include empty directories,
+so if you want to know if any empty directories are out of sync,
+consider alternatively running the above `rclone sync` command with `--dry-run` added.
+
+See also: [Concurrent modifications](#concurrent-modifications), [`--resilient`](#resilient)
+
+### --resilient

***Caution: this is an experimental feature. Use at your own risk!***

-By default, most errors or interruptions will cause bisync to abort and
-require [`--resync`](#resync) to recover. This is a safety feature,
-to prevent bisync from running again until a user checks things out.
-However, in some cases, bisync can go too far and enforce a lockout when one isn't actually necessary,
-like for certain less-serious errors that might resolve themselves on the next run.
-When `--resilient` is specified, bisync tries its best to recover and self-correct,
-and only requires `--resync` as a last resort when a human's involvement is absolutely necessary.
-The intended use case is for running bisync as a background process (such as via scheduled [cron](#cron)).
+By default, most errors or interruptions will cause bisync to abort and
+require [`--resync`](#resync) to recover.
This is a safety feature, to prevent +bisync from running again until a user checks things out. However, in some +cases, bisync can go too far and enforce a lockout when one isn't actually +necessary, like for certain less-serious errors that might resolve themselves +on the next run. When `--resilient` is specified, bisync tries its best to +recover and self-correct, and only requires `--resync` as a last resort when a +human's involvement is absolutely necessary. The intended use case is for +running bisync as a background process (such as via scheduled [cron](#cron)). -When using `--resilient` mode, bisync will still report the error and abort, -however it will not lock out future runs -- allowing the possibility of retrying at the next normally scheduled time, -without requiring a `--resync` first. Examples of such retryable errors include -access test failures, missing listing files, and filter change detections. -These safety features will still prevent the *current* run from proceeding -- -the difference is that if conditions have improved by the time of the *next* run, -that next run will be allowed to proceed. -Certain more serious errors will still enforce a `--resync` lockout, even in `--resilient` mode, to prevent data loss. +When using `--resilient` mode, bisync will still report the error and abort, +however it will not lock out future runs -- allowing the possibility of +retrying at the next normally scheduled time, without requiring a `--resync` +first. Examples of such retryable errors include access test failures, missing +listing files, and filter change detections. These safety features will still +prevent the *current* run from proceeding -- the difference is that if +conditions have improved by the time of the *next* run, that next run will be +allowed to proceed. Certain more serious errors will still enforce a +`--resync` lockout, even in `--resilient` mode, to prevent data loss. -Behavior of `--resilient` may change in a future version. +Behavior of `--resilient` may change in a future version. (See also: +[`--recover`](#recover), [`--max-lock`](#max-lock), [Graceful +Shutdown](#graceful-shutdown)) + +### --recover + +If `--recover` is set, in the event of a sudden interruption or other +un-graceful shutdown, bisync will attempt to automatically recover on the next +run, instead of requiring `--resync`. Bisync is able to recover robustly by +keeping one "backup" listing at all times, representing the state of both paths +after the last known successful sync. Bisync can then compare the current state +with this snapshot to determine which changes it needs to retry. Changes that +were synced after this snapshot (during the run that was later interrupted) +will appear to bisync as if they are "new or changed on both sides", but in +most cases this is not a problem, as bisync will simply do its usual "equality +check" and learn that no action needs to be taken on these files, since they +are already identical on both sides. + +In the rare event that a file is synced successfully during a run that later +aborts, and then that same file changes AGAIN before the next run, bisync will +think it is a sync conflict, and handle it accordingly. (From bisync's +perspective, the file has changed on both sides since the last trusted sync, +and the files on either side are not currently identical.) 
Therefore,
+`--recover` carries with it a slightly increased chance of having conflicts --
+though in practice this is pretty rare, as the conditions required to cause it
+are quite specific. This risk can be reduced by using bisync's ["Graceful
+Shutdown"](#graceful-shutdown) mode (triggered by sending `SIGINT` or
+`Ctrl+C`), when you have the choice, instead of forcing a sudden termination.
+
+`--recover` and `--resilient` are similar, but distinct -- the main difference
+is that `--resilient` is about _retrying_, while `--recover` is about
+_recovering_. Most users will probably want both. `--resilient` allows retrying
+when bisync has chosen to abort itself due to safety features such as failing
+`--check-access` or detecting a filter change. `--resilient` does not cover
+external interruptions such as a user shutting down their computer in the
+middle of a sync -- that is what `--recover` is for.
+
+### --max-lock
+
+Bisync uses [lock files](#lock-file) as a safety feature to prevent
+interference from other bisync runs while it is running. Bisync normally
+removes these lock files at the end of a run, but if bisync is abruptly
+interrupted, these files will be left behind. By default, they will lock out
+all future runs, until the user has a chance to manually check things out and
+remove the lock. As an alternative, `--max-lock` can be used to make them
+automatically expire after a certain period of time, so that future runs are
+not locked out forever, and auto-recovery is possible. `--max-lock` can be any
+duration `2m` or greater (or `0` to disable). If set, lock files older than
+this will be considered "expired", and future runs will be allowed to disregard
+them and proceed. (Note that the `--max-lock` duration must be set by the
+process that left the lock file -- not the later one interpreting it.)
+
+If set, bisync will also "renew" these lock files every `--max-lock minus one
+minute` throughout a run, for extra safety. (For example, with `--max-lock 5m`,
+bisync would renew the lock file (for another 5 minutes) every 4 minutes until
+the run has completed.) In other words, it should not be possible for a lock
+file to pass its expiration time while the process that created it is still
+running -- and you can therefore be reasonably sure that any _expired_ lock
+file you may find was left there by an interrupted run, not one that is still
+running and just taking a while.
+
+If `--max-lock` is `0` or not set, the default is that lock files will never
+expire, and will block future runs (of these same two bisync paths)
+indefinitely.
+
+For maximum resilience from disruptions, consider setting a relatively short
+duration like `--max-lock 2m` along with [`--resilient`](#resilient) and
+[`--recover`](#recover), and a relatively frequent [cron schedule](#cron). The
+result will be a very robust "set-it-and-forget-it" bisync run that can
+automatically bounce back from almost any interruption it might encounter,
+without requiring the user to get involved and run a `--resync`. (See also:
+[Graceful Shutdown](#graceful-shutdown) mode)
+
+
+### --backup-dir1 and --backup-dir2
+
+As of `v1.66`, [`--backup-dir`](https://rclone.org/docs/#backup-dir-dir) is supported in bisync.
+Because `--backup-dir` must be a non-overlapping path on the same remote,
+Bisync has introduced new `--backup-dir1` and `--backup-dir2` flags to support
+separate backup-dirs for `Path1` and `Path2` (bisyncing between different
+remotes with `--backup-dir` would not otherwise be possible.)
`--backup-dir1`
+and `--backup-dir2` can use different remotes from each other, but
+`--backup-dir1` must use the same remote as `Path1`, and `--backup-dir2` must
+use the same remote as `Path2`. Each backup directory must not overlap its
+respective bisync Path without being excluded by a filter rule.
+
+The standard `--backup-dir` will also work, if both paths use the same remote
+(but note that deleted files from both paths would be mixed together in the
+same dir). If either `--backup-dir1` or `--backup-dir2` is set, it will
+override `--backup-dir`.
+
+Example:
+```
+rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
+```
+
+In this example, if the user deletes a file in
+`/Users/someuser/some/local/path/Bisync`, bisync will propagate the delete to
+the other side by moving the corresponding file from `gdrive:Bisync` to
+`gdrive:BackupDir`. If the user deletes a file from `gdrive:Bisync`, bisync
+moves it from `/Users/someuser/some/local/path/Bisync` to
+`/Users/someuser/some/local/path/BackupDir`.
+
+In the event of a [rename due to a sync conflict](#conflict-loser), the
+rename is not considered a delete, unless a previous conflict with the same
+name already exists and would get overwritten.
+
+See also: [`--suffix`](https://rclone.org/docs/#suffix-suffix),
+[`--suffix-keep-extension`](https://rclone.org/docs/#suffix-keep-extension)

## Operation

@@ -20005,7 +21946,8 @@ On each successive run it will:
- Lock file prevents multiple simultaneous runs when taking a while.
  This can be particularly useful if bisync is run by cron scheduler.
- Handle change conflicts non-destructively by creating
-  `..path1` and `..path2` file versions.
+  `.conflict1`, `.conflict2`, etc. file versions, according to
+  [`--conflict-resolve`](#conflict-resolve), [`--conflict-loser`](#conflict-loser), and [`--conflict-suffix`](#conflict-suffix) settings.
- File system access health check using `RCLONE_TEST` files
  (see the `--check-access` flag).
- Abort on excessive deletes - protects against a failed listing @@ -20032,8 +21974,8 @@ Path1 deleted | File no longer exists on Path1 | File is deleted Type | Description | Result | Implementation --------------------------------|---------------------------------------|------------------------------------|----------------------- Path1 new/changed AND Path2 new/changed AND Path1 == Path2 | File is new/changed on Path1 AND new/changed on Path2 AND Path1 version is currently identical to Path2 | No change | None -Path1 new AND Path2 new | File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 -Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) | Files renamed to _Path1 and _Path2 | `rclone copy` _Path2 file to Path1, `rclone copy` _Path1 file to Path2 +Path1 new AND Path2 new | File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) | Conflicts handled according to [`--conflict-resolve`](#conflict-resolve) & [`--conflict-loser`](#conflict-loser) settings | default: `rclone copy` renamed `Path2.conflict2` file to Path1, `rclone copy` renamed `Path1.conflict1` file to Path2 +Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) | Conflicts handled according to [`--conflict-resolve`](#conflict-resolve) & [`--conflict-loser`](#conflict-loser) settings | default: `rclone copy` renamed `Path2.conflict2` file to Path1, `rclone copy` renamed `Path1.conflict1` file to Path2 Path2 newer AND Path1 deleted | File is newer on Path2 AND also deleted on Path1 | Path2 version survives | `rclone copy` Path2 to Path1 Path2 deleted AND Path1 changed | File is deleted on Path2 AND changed (newer/older/size) on Path1 | Path1 version survives |`rclone copy` Path1 to Path2 Path1 deleted AND Path2 changed | File is deleted on Path1 AND changed (newer/older/size) on Path2 | Path2 version survives | `rclone copy` Path2 to Path1 @@ -20044,7 +21986,7 @@ Now, when bisync comes to a file that it wants to rename (because it is new/chan it first checks whether the Path1 and Path2 versions are currently *identical* (using the same underlying function as [`check`](commands/rclone_check/).) If bisync concludes that the files are identical, it will skip them and move on. -Otherwise, it will create renamed `..Path1` and `..Path2` duplicates, as before. +Otherwise, it will create renamed duplicates, as before. This behavior also [improves the experience of renaming directories](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories), as a `--resync` is no longer required, so long as the same change has been made on both sides. @@ -20061,19 +22003,12 @@ before you commit to the changes. ### Modification times -Bisync relies on file timestamps to identify changed files and will -_refuse_ to operate if backend lacks the modification time support. - +By default, bisync compares files by modification time and size. If you or your application should change the content of a file -without changing the modification time then bisync will _not_ +without changing the modification time and size, then bisync will _not_ notice the change, and thus will not copy it to the other side. 
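+As an alternative, you can have bisync compare checksums as well, if your
+remotes support them. For example (a sketch, with illustrative paths):
+
+```
+rclone bisync Path1 Path2 --compare size,modtime,checksum
+```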
-
-Note that on some cloud storage systems it is not possible to have file
-timestamps that match _precisely_ between the local and other filesystems.
-
-Bisync's approach to this problem is by tracking the changes on each side
-_separately_ over time with a local database of files in that side then
-applying the resulting changes on the other side.
+See [`--compare`](#compare) for details.

### Error handling {#error-handling}

@@ -20097,7 +22032,8 @@ typically at `${HOME}/.cache/rclone/bisync/` on Linux.

Some errors are considered temporary and re-running the bisync is not blocked.
The _critical return_ blocks further bisync runs.

-See also: [`--resilient`](#resilient)
+See also: [`--resilient`](#resilient), [`--recover`](#recover),
+[`--max-lock`](#max-lock), [Graceful Shutdown](#graceful-shutdown)

### Lock file

@@ -20109,6 +22045,8 @@ Delete the lock file as part of debugging the situation.
The lock file effectively blocks follow-on (e.g., scheduled by _cron_) runs
when the prior invocation is taking a long time.
The lock file contains _PID_ of the blocking process, which may help in debug.
+Lock files can be set to automatically expire after a certain amount of time,
+using the [`--max-lock`](#max-lock) flag.

**Note**
that while concurrent bisync runs are allowed, _be very cautious_
@@ -20122,6 +22060,32 @@ lest there be replicated files, deleted files and general mayhem.

- `1` for a non-critical failing run (a rerun may be successful),
- `2` for a critically aborted run (requires a `--resync` to recover).

+### Graceful Shutdown
+
+Bisync has a "Graceful Shutdown" mode which is activated by sending `SIGINT` or
+pressing `Ctrl+C` during a run. Once triggered, bisync will use best efforts to
+exit cleanly before the timer runs out. If bisync is in the middle of
+transferring files, it will attempt to cleanly empty its queue by finishing
+what it has started but not taking more. If it cannot do so within 30 seconds,
+it will cancel the in-progress transfers at that point and then give itself a
+maximum of 60 seconds to wrap up, save its state for next time, and exit. With
+the `-vP` flags you will see constant status updates and a final confirmation
+of whether or not the graceful shutdown was successful.
+
+At any point during the "Graceful Shutdown" sequence, a second `SIGINT` or
+`Ctrl+C` will trigger an immediate, un-graceful exit, which will leave things
+in a messier state. Usually a robust recovery will still be possible if using
+[`--recover`](#recover) mode, otherwise you will need to do a `--resync`.
+
+If you plan to use Graceful Shutdown mode, it is recommended to use
+[`--resilient`](#resilient) and [`--recover`](#recover), and it is important to
+NOT use [`--inplace`](https://rclone.org/docs/#inplace), otherwise you risk leaving
+partially-written files on one side, which may be confused for real files on
+the next run. Note also that in the event of an abrupt interruption, a [lock
+file](#lock-file) will be left behind to block concurrent runs. You will need
+to delete it before you can proceed with the next run (or wait for it to
+expire on its own, if using `--max-lock`.)
+
## Limitations

### Supported backends

@@ -20134,62 +22098,39 @@ Bisync is considered _BETA_ and has been tested with the following backends:

- S3
- SFTP
- Yandex Disk
+- Crypt

It has not been fully tested
with other services yet.
If it works, or sorta works, please let us know and we'll update the list.
Run the test suite to check for proper operation as described below. -First release of `rclone bisync` requires that underlying backend supports -the modification time feature and will refuse to run otherwise. -This limitation will be lifted in a future `rclone bisync` release. +The first release of `rclone bisync` required both underlying backends to support +modification times, and refused to run otherwise. +This limitation has been lifted as of `v1.66`, as bisync now supports comparing +checksum and/or size instead of (or in addition to) modtime. +See [`--compare`](#compare) for details. ### Concurrent modifications -When using **Local, FTP or SFTP** remotes rclone does not create _temporary_ +When using **Local, FTP or SFTP** remotes with [`--inplace`](https://rclone.org/docs/#inplace), rclone does not create _temporary_ files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. -This will be solved in a future release, there is no workaround at the moment. +It is therefore recommended to _omit_ `--inplace`. Files that **change during** a bisync run may result in data loss. -This has been seen in a highly dynamic environment, where the filesystem -is getting hammered by running processes during the sync. -The currently recommended solution is to sync at quiet times or [filter out](#filtering) -unnecessary directories and files. - -As an [alternative approach](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=scans%2C%20to%20avoid-,errors%20if%20files%20changed%20during%20sync,-Given%20the%20number), -consider using `--check-sync=false` (and possibly `--resilient`) to make bisync more forgiving -of filesystems that change during the sync. -Be advised that this may cause bisync to miss events that occur during a bisync run, -so it is a good idea to supplement this with a periodic independent integrity check, -and corrective sync if diffs are found. For example, a possible sequence could look like this: - -1. Normally scheduled bisync run: - -``` -rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -``` - -2. Periodic independent integrity check (perhaps scheduled nightly or weekly): - -``` -rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt -``` - -3. If diffs are found, you have some choices to correct them. -If one side is more up-to-date and you want to make the other side match it, you could run: - -``` -rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v -``` -(or switch Path1 and Path2 to make Path2 the source-of-truth) - -Or, if neither side is totally up-to-date, you could run a `--resync` to bring them back into agreement -(but remember that this could cause deleted files to re-appear.) - -*Note also that `rclone check` does not currently include empty directories, -so if you want to know if any empty directories are out of sync, -consider alternatively running the above `rclone sync` command with `--dry-run` added. +Prior to `rclone v1.66`, this was commonly seen in highly dynamic environments, where the filesystem +was getting hammered by running processes during the sync. 
+As of `rclone v1.66`, bisync was redesigned to use a "snapshot" model, +greatly reducing the risks from changes during a sync. +Changes that are not detected during the current sync will now be detected during the following sync, +and will no longer cause the entire run to throw a critical error. +There is additionally a mechanism to mark files as needing to be internally rechecked next time, for added safety. +It should therefore no longer be necessary to sync only at quiet times -- +however, note that an error can still occur if a file happens to change at the exact moment it's +being read/written by bisync (same as would happen in `rclone sync`.) +(See also: [`--ignore-checksum`](https://rclone.org/docs/#ignore-checksum), +[`--local-no-check-updated`](https://rclone.org/local/#local-no-check-updated)) ### Empty directories @@ -20209,11 +22150,17 @@ and use `--resync` when you need to switch. ### Renamed directories -Renaming a folder on the Path1 side results in deleting all files on +By default, renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new. -Currently, the most effective and efficient method of renaming a directory + +A recommended solution is to use [`--track-renames`](https://rclone.org/docs/#track-renames), +which is now supported in bisync as of `rclone v1.66`. +Note that `--track-renames` is not available during `--resync`, +as `--resync` does not delete anything (`--track-renames` only supports `sync`, not `copy`.) + +Otherwise, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of `rclone v1.64`, a `--resync` is no longer required after doing so, as bisync will automatically detect that Path1 and Path2 are in agreement.) @@ -20227,25 +22174,20 @@ and there is also a [known issue concerning Google Drive users with many empty d For now, the recommended way to avoid using `--fast-list` is to add `--disable ListR` to all bisync commands. The default behavior may change in a future version. -### Overridden Configs +### Case (and unicode) sensitivity {#case-sensitivity} -When rclone detects an overridden config, it adds a suffix like `{ABCDE}` on the fly -to the internal name of the remote. Bisync follows suit by including this suffix in its listing filenames. -However, this suffix does not necessarily persist from run to run, especially if different flags are provided. -So if next time the suffix assigned is `{FGHIJ}`, bisync will get confused, -because it's looking for a listing file with `{FGHIJ}`, when the file it wants has `{ABCDE}`. -As a result, it throws -`Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run` -and refuses to run again until the user runs a `--resync` (unless using `--resilient`). -The best workaround at the moment is to set any backend-specific flags in the [config file](https://rclone.org/commands/rclone_config/) -instead of specifying them with command flags. (You can still override them as needed for other rclone commands.) +As of `v1.66`, case and unicode form differences no longer cause critical errors, +and normalization (when comparing between filesystems) is handled according to the same flags and defaults as `rclone sync`. 
+See the following options (all of which are supported by bisync) to control this behavior more granularly: +- [`--fix-case`](https://rclone.org/docs/#fix-case) +- [`--ignore-case-sync`](https://rclone.org/docs/#ignore-case-sync) +- [`--no-unicode-normalization`](https://rclone.org/docs/#no-unicode-normalization) +- [`--local-unicode-normalization`](https://rclone.org/local/#local-unicode-normalization) and +[`--local-case-sensitive`](https://rclone.org/local/#local-case-sensitive) (caution: these are normally not what you want.) -### Case sensitivity - -Synching with **case-insensitive** filesystems, such as Windows or `Box`, -can result in file name conflicts. This will be fixed in a future release. -The near-term workaround is to make sure that files on both sides -don't have spelling case differences (`Smile.jpg` vs. `smile.jpg`). +Note that in the (probably rare) event that `--fix-case` is used AND a file is new/changed on both sides +AND the checksums match AND the filename case does not match, the Path1 filename is considered the winner, +for the purposes of `--fix-case` (Path2 will be renamed to match it). ## Windows support {#windows} @@ -20526,23 +22468,58 @@ files are generating complaints. If the error is consider using the flag [--drive-acknowledge-abuse](https://rclone.org/drive/#drive-acknowledge-abuse). -### Google Doc files +### Google Docs (and other files of unknown size) {#gdocs} -Google docs exist as virtual files on Google Drive and cannot be transferred -to other filesystems natively. While it is possible to export a Google doc to -a normal file (with `.xlsx` extension, for example), it is not possible -to import a normal file back into a Google document. +As of `v1.66`, [Google Docs](https://rclone.org/drive/#import-export-of-google-documents) +(including Google Sheets, Slides, etc.) are now supported in bisync, subject to +the same options, defaults, and limitations as in `rclone sync`. When bisyncing +drive with non-drive backends, the drive -> non-drive direction is controlled +by [`--drive-export-formats`](https://rclone.org/drive/#drive-export-formats) (default +`"docx,xlsx,pptx,svg"`) and the non-drive -> drive direction is controlled by +[`--drive-import-formats`](https://rclone.org/drive/#drive-import-formats) (default none.) -Bisync's handling of Google Doc files is to flag them in the run log output -for user's attention and ignore them for any file transfers, deletes, or syncs. -They will show up with a length of `-1` in the listings. -This bisync run is otherwise successful: +For example, with the default export/import formats, a Google Sheet on the +drive side will be synced to an `.xlsx` file on the non-drive side. In the +reverse direction, `.xlsx` files with filenames that match an existing Google +Sheet will be synced to that Google Sheet, while `.xlsx` files that do NOT +match an existing Google Sheet will be copied to drive as normal `.xlsx` files +(without conversion to Sheets, although the Google Drive web browser UI may +still give you the option to open it as one.) -``` -2021/05/11 08:23:15 INFO : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:" -2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx" -2021/05/11 08:23:15 INFO : Bisync successful -``` +If `--drive-import-formats` is set (it's not, by default), then all of the +specified formats will be converted to Google Docs, if there is no existing +Google Doc with a matching name. 
Caution: such conversion can be quite lossy, +and in most cases it's probably not what you want! + +To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for +Desktop"), use: `--drive-export-formats url` (or +[alternatives](https://rclone.org/drive/#exportformats:~:text=available%20Google%20Documents.-,Extension,macOS,-Standard%20options).) + +Note that these link files cannot be edited on the non-drive side -- you will +get errors if you try to sync an edited link file back to drive. They CAN be +deleted (it will result in deleting the corresponding Google Doc.) If you +create a `.url` file on the non-drive side that does not match an existing +Google Doc, bisyncing it will just result in copying the literal `.url` file +over to drive (no Google Doc will be created.) So, as a general rule of thumb, +think of them as read-only placeholders on the non-drive side, and make all +your changes on the drive side. + +Likewise, even with other export-formats, it is best to only move/rename Google +Docs on the drive side. This is because otherwise, bisync will interpret this +as a file deleted and another created, and accordingly, it will delete the +Google Doc and create a new file at the new path. (Whether or not that new file +is a Google Doc depends on `--drive-import-formats`.) + +Lastly, take note that all Google Docs on the drive side have a size of `-1` +and no checksum. Therefore, they cannot be reliably synced with the +`--checksum` or `--size-only` flags. (To be exact: they will still get +created/deleted, and bisync's delta engine will notice changes and queue them +for syncing, but the underlying sync function will consider them identical and +skip them.) To work around this, use the default (modtime and size) instead of +`--checksum` or `--size-only`. + +To ignore Google Docs entirely, use +[`--drive-skip-gdocs`](https://rclone.org/drive/#drive-skip-gdocs). ## Usage examples @@ -20920,6 +22897,30 @@ about _Unison_ and synchronization in general. ## Changelog +### `v1.66` +* Copies and deletes are now handled in one operation instead of two +* `--track-renames` and `--backup-dir` are now supported +* Partial uploads known issue on `local`/`ftp`/`sftp` has been resolved (unless using `--inplace`) +* Final listings are now generated from sync results, to avoid needing to re-list +* Bisync is now much more resilient to changes that happen during a bisync run, and far less prone to critical errors / undetected changes +* Bisync is now capable of rolling a file listing back in cases of uncertainty, essentially marking the file as needing to be rechecked next time. +* A few basic terminal colors are now supported, controllable with [`--color`](https://rclone.org/docs/#color-when) (`AUTO`|`NEVER`|`ALWAYS`) +* Initial listing snapshots of Path1 and Path2 are now generated concurrently, using the same "march" infrastructure as `check` and `sync`, +for performance improvements and less [risk of error](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors). 
+* Fixed handling of unicode normalization and case insensitivity, support for [`--fix-case`](https://rclone.org/docs/#fix-case), [`--ignore-case-sync`](/docs/#ignore-case-sync), [`--no-unicode-normalization`](/docs/#no-unicode-normalization)
+* `--resync` is now much more efficient (especially for users of `--create-empty-src-dirs`)
+* Google Docs (and other files of unknown size) are now supported (with the same options as in `sync`)
+* Equality checks before a sync conflict rename now fall back to `cryptcheck` (when possible) or `--download`,
+instead of `--size-only`, when `check` is not available.
+* Bisync no longer fails to find the correct listing file when configs are overridden with backend-specific flags.
+* Bisync now fully supports comparing based on any combination of size, modtime, and checksum, lifting the prior restriction on backends without modtime support.
+* Bisync now supports a "Graceful Shutdown" mode to cleanly cancel a run early without requiring `--resync`.
+* New `--recover` flag allows robust recovery in the event of interruptions, without requiring `--resync`.
+* A new `--max-lock` setting allows lock files to automatically renew and expire, for better automatic recovery when a run is interrupted.
+* Bisync now supports auto-resolving sync conflicts and customizing rename behavior with new [`--conflict-resolve`](#conflict-resolve), [`--conflict-loser`](#conflict-loser), and [`--conflict-suffix`](#conflict-suffix) flags.
+* A new [`--resync-mode`](#resync-mode) flag allows more control over which version of a file gets kept during a `--resync`.
+* Bisync now supports [`--retries`](https://rclone.org/docs/#retries-int) and [`--retries-sleep`](/docs/#retries-sleep-time) (when [`--resilient`](#resilient) is set.)

### `v1.64`
* Fixed an [issue](https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Dry%20runs%20are%20not%20completely%20dry)
causing dry runs to inadvertently commit filter changes
@@ -21282,6 +23283,17 @@ Properties:
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

+#### --fichier-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FICHIER_DESCRIPTION
+- Type: string
+- Required: false
+

## Limitations

@@ -21401,341 +23413,22 @@ Properties:
- Type: string
- Required: true

-
-
-# Amazon Drive
-
-Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
-service run by Amazon for consumers.
-
-## Status
-
-**Important:** rclone supports Amazon Drive only if you have your own
-set of API keys. Unfortunately the [Amazon Drive developer
-program](https://developer.amazon.com/amazon-drive) is now closed to
-new entries so if you don't already have your own set of keys you will
-not be able to use rclone with Amazon Drive.
-
-For the history on why rclone no longer has a set of Amazon Drive API
-keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314).
-
-If you happen to know anyone who works at Amazon then please ask them
-to re-instate rclone into the Amazon Drive developer program - thanks!
-
-## Configuration
-
-The initial setup for Amazon Drive involves getting a token from
-Amazon which you need to do in your browser. `rclone config` walks
-you through it.
-
-The configuration process for Amazon Drive may involve using an [oauth
-proxy](https://github.com/ncw/oauthproxy). This is used to keep the
-Amazon credentials out of the source code.
The proxy runs in Google's -very secure App Engine environment and doesn't store any credentials -which pass through it. - -Since rclone doesn't currently have its own Amazon Drive credentials -so you will either need to have your own `client_id` and -`client_secret` with Amazon Drive, or use a third-party oauth proxy -in which case you will need to enter `client_id`, `client_secret`, -`auth_url` and `token_url`. - -Note also if you are not using Amazon's `auth_url` and `token_url`, -(ie you filled in something for those) then if setting up on a remote -machine you can only use the [copying the config method of -configuration](https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) -- `rclone authorize` will not work. - -Here is an example of how to make a remote called `remote`. First run: - - rclone config - -This will guide you through an interactive setup process: - -``` -No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Amazon Drive - \ "amazon cloud drive" -[snip] -Storage> amazon cloud drive -Amazon Application Client Id - required. -client_id> your client ID goes here -Amazon Application Client Secret - required. -client_secret> your client secret goes here -Auth server URL - leave blank to use Amazon's. -auth_url> Optional auth URL -Token server url - leave blank to use Amazon's. -token_url> Optional token URL -Remote config -Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = your client ID goes here -client_secret = your client secret goes here -auth_url = Optional auth URL -token_url = Optional token URL -token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -``` - -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. This only runs from the moment it -opens your browser to the moment you get back the verification -code. This is on `http://127.0.0.1:53682/` and this it may require -you to unblock it temporarily if you are running a host firewall. - -Once configured you can then use `rclone` like this, - -List directories in top level of your Amazon Drive - - rclone lsd remote: - -List all the files in your Amazon Drive - - rclone ls remote: - -To copy a local directory to an Amazon Drive directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -Amazon Drive doesn't allow modification times to be changed via -the API so these won't be accurate or used for syncing. 
- -It does support the MD5 hash algorithm, so for a more accurate sync, -you can use the `--checksum` flag. - -### Restricted filename characters - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| NUL | 0x00 | ␀ | -| / | 0x2F | / | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can't be used in JSON strings. - -### Deleting files - -Any files you delete with rclone will end up in the trash. Amazon -don't provide an API to permanently delete files, nor to empty the -trash, so you will have to do that with one of Amazon's apps or via -the Amazon Drive website. As of November 17, 2016, files are -automatically deleted by Amazon from the trash after 30 days. - -### Using with non `.com` Amazon accounts - -Let's say you usually use `amazon.co.uk`. When you authenticate with -rclone it will take you to an `amazon.com` page to log in. Your -`amazon.co.uk` email and password should work here just fine. - - -### Standard options - -Here are the Standard options specific to amazon cloud drive (Amazon Drive). - -#### --acd-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_ACD_CLIENT_ID -- Type: string -- Required: false - -#### --acd-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_ACD_CLIENT_SECRET -- Type: string -- Required: false - ### Advanced options -Here are the Advanced options specific to amazon cloud drive (Amazon Drive). +Here are the Advanced options specific to alias (Alias for an existing remote). -#### --acd-token +#### --alias-description -OAuth Access Token as a JSON blob. +Description of the remote Properties: -- Config: token -- Env Var: RCLONE_ACD_TOKEN +- Config: description +- Env Var: RCLONE_ALIAS_DESCRIPTION - Type: string - Required: false -#### --acd-auth-url -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_ACD_AUTH_URL -- Type: string -- Required: false - -#### --acd-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_ACD_TOKEN_URL -- Type: string -- Required: false - -#### --acd-checkpoint - -Checkpoint for internal polling (debug). - -Properties: - -- Config: checkpoint -- Env Var: RCLONE_ACD_CHECKPOINT -- Type: string -- Required: false - -#### --acd-upload-wait-per-gb - -Additional time per GiB to wait after a failed complete upload to see if it appears. - -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. This -happens sometimes for files over 1 GiB in size and nearly every time for -files bigger than 10 GiB. This parameter controls the time rclone waits -for the file to appear. - -The default value for this parameter is 3 minutes per GiB, so by -default it will wait 3 minutes for every GiB uploaded to see if the -file appears. - -You can disable this feature by setting it to 0. This may cause -conflict errors as rclone retries the failed upload but the file will -most likely appear correctly eventually. - -These values were determined empirically by observing lots of uploads -of big files for a range of file sizes. - -Upload with the "-v" flag to see more info about what rclone is doing -in this situation. 
-
-Properties:
-
-- Config: upload_wait_per_gb
-- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
-- Type: Duration
-- Default: 3m0s
-
-#### --acd-templink-threshold
-
-Files >= this size will be downloaded via their tempLink.
-
-Files this size or more will be downloaded via their "tempLink". This
-is to work around a problem with Amazon Drive which blocks downloads
-of files bigger than about 10 GiB. The default for this is 9 GiB which
-shouldn't need to be changed.
-
-To download files above this threshold, rclone requests a "tempLink"
-which downloads the file through a temporary URL directly from the
-underlying S3 storage.
-
-Properties:
-
-- Config: templink_threshold
-- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
-- Type: SizeSuffix
-- Default: 9Gi
-
-#### --acd-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_ACD_ENCODING
-- Type: Encoding
-- Default: Slash,InvalidUtf8,Dot
-
-
-
-## Limitations
-
-Note that Amazon Drive is case insensitive so you can't have a
-file called "Hello.doc" and one called "hello.doc".
-
-Amazon Drive has rate limiting so you may notice errors in the
-sync (429 errors). rclone will automatically retry the sync up to 3
-times by default (see `--retries` flag) which should hopefully work
-around this problem.
-
-Amazon Drive has an internal limit of file sizes that can be uploaded
-to the service. This limit is not officially published, but all files
-larger than this will fail.
-
-At the time of writing (Jan 2016) is in the area of 50 GiB per file.
-This means that larger files are likely to fail.
-
-Unfortunately there is no way for rclone to see that this failure is
-because of file size, so it will retry the operation, as any other
-failure. To avoid this problem, use `--max-size 50000M` option to limit
-the maximum size of uploaded files. Note that `--max-size` does not split
-files into segments, it only ignores files over this size.
-
-`rclone about` is not supported by the Amazon Drive backend. Backends without
-this capability cannot determine free space for an rclone mount or
-use policy `mfs` (most free space) as a member of an rclone union
-remote.
-
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)

# Amazon S3 Storage Providers

@@ -21817,7 +23510,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Liara, Minio, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
[snip]
Storage> s3
@@ -22323,6 +24016,7 @@ permissions are required to be available on the bucket being written to:
* `GetObject`
* `PutObject`
* `PutObjectACL`
+* `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket))

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
@@ -22364,6 +24058,7 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies the bucket
and the other implies the bucket's objects.
+3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with `rclone sync`. @@ -23097,10 +24792,10 @@ Properties: #### --s3-upload-concurrency -Concurrency for multipart uploads. +Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded -concurrently. +concurrently for multipart uploads and copies. If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing @@ -23149,6 +24844,19 @@ Properties: - Type: bool - Default: false +#### --s3-use-dual-stack + +If true use AWS S3 dual-stack endpoint (IPv6 support). + +See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html) + +Properties: + +- Config: use_dual_stack +- Env Var: RCLONE_S3_USE_DUAL_STACK +- Type: bool +- Default: false + #### --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. @@ -23453,6 +25161,25 @@ Properties: - Type: Time - Default: off +#### --s3-version-deleted + +Show deleted file markers when using versions. + +This shows deleted file markers in the listing when using versions. These will appear +as 0 size files. The only operation which can be performed on them is deletion. + +Deleting a delete marker will reveal the previous version. + +Deleted files will always show with a timestamp. + + +Properties: + +- Config: version_deleted +- Env Var: RCLONE_S3_VERSION_DELETED +- Type: bool +- Default: false + #### --s3-decompress If set this will decompress gzip encoded objects. @@ -23603,6 +25330,17 @@ Properties: - Type: Tristate - Default: unset +#### --s3-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_S3_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case. @@ -24183,10 +25921,10 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] -Storage> 5 +Storage> s3 Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -24307,18 +26045,11 @@ To configure access to IBM COS S3, follow the steps below: 3. Select "s3" storage. ``` Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS) - \ "s3" - 4 / Backblaze B2 - \ "b2" [snip] - 23 / HTTP - \ "http" -Storage> 3 +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" +[snip] +Storage> s3 ``` 4. Select IBM COS as the S3 Storage Provider. @@ -24478,7 +26209,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. 
[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -24584,7 +26315,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -24822,15 +26553,8 @@ name> qiniu ``` Choose a number from below, or type in your own value - 1 / 1Fichier - \ (fichier) - 2 / Akamai NetStorage - \ (netstorage) - 3 / Alias for an existing remote - \ (alias) - 4 / Amazon Drive - \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -25095,7 +26819,7 @@ Choose `s3` backend Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -25396,7 +27120,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" [snip] Storage> s3 @@ -25506,7 +27230,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. ... - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) ... Storage> s3 @@ -25762,15 +27486,8 @@ name> leviia ``` Choose a number from below, or type in your own value - 1 / 1Fichier - \ (fichier) - 2 / Akamai NetStorage - \ (netstorage) - 3 / Alias for an existing remote - \ (alias) - 4 / Amazon Drive - \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -25983,7 +27700,7 @@ Option Storage. Type of storage to configure. 
Choose a number from below, or type in your own value. [snip] - X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others +XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others \ (s3) [snip] Storage> s3 @@ -26228,13 +27945,8 @@ name> cos ``` Choose a number from below, or type in your own value -1 / 1Fichier - \ "fichier" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" [snip] Storage> s3 @@ -26642,7 +28354,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" Storage> s3 @@ -27293,9 +29005,12 @@ Properties: #### --b2-download-auth-duration -Time before the authorization token will expire in s or suffix ms|s|m|h|d. +Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. + +This is used in combination with "rclone link" for making files +accessible to the public and sets the duration before the download +authorization token will expire. -The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Properties: @@ -27371,6 +29086,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_B2_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the b2 backend. @@ -27918,6 +29644,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot +#### --box-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_BOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -28624,6 +30361,17 @@ Properties: - Type: Duration - Default: 1s +#### --cache-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CACHE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the cache backend. @@ -29120,6 +30868,17 @@ Properties: - If meta format is set to "none", rename transactions will always be used. - This method is EXPERIMENTAL, don't use on production systems. 
+#### --chunker-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CHUNKER_DESCRIPTION +- Type: string +- Required: false + # Citrix ShareFile @@ -29421,6 +31180,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --sharefile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SHAREFILE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -30011,6 +31781,22 @@ Properties: - Type: bool - Default: false +#### --crypt-strict-names + +If set, this will raise an error when crypt comes across a filename that can't be decrypted. + +(By default, rclone will just log a NOTICE and continue as normal.) +This can happen if encrypted and unencrypted files are stored in the same +directory (which is not recommended.) It may also indicate a more serious +problem that should be investigated. + +Properties: + +- Config: strict_names +- Env Var: RCLONE_CRYPT_STRICT_NAMES +- Type: bool +- Default: false + #### --crypt-filename-encoding How to encode the encrypted filename to text string. @@ -30048,6 +31834,17 @@ Properties: - Type: string - Default: ".bin" +#### --crypt-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CRYPT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -30216,7 +32013,7 @@ encoding is modified in two ways: * we strip the padding character `=` `base32` is used rather than the more efficient `base64` so rclone can be -used on case insensitive remotes (e.g. Windows, Amazon Drive). +used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc). ### Key derivation @@ -30386,6 +32183,17 @@ Properties: - Type: SizeSuffix - Default: 20Mi +#### --compress-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMPRESS_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -30544,6 +32352,21 @@ Properties: - Type: SpaceSepList - Default: +### Advanced options + +Here are the Advanced options specific to combine (Combine several remotes into one). + +#### --combine-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMBINE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
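+
+As a minimal sketch (the remote names and directories are placeholders), a combine remote merging two upstreams under a single root might be configured like this:
+
+    [combined]
+    type = combine
+    # each upstream appears as a directory under the combined root
+    upstreams = documents=drive:Documents photos=gphotos:media
+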
@@ -31001,6 +32824,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --dropbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DROPBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -31324,6 +33158,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --filefabric-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FILEFABRIC_DESCRIPTION +- Type: string +- Required: false + # FTP @@ -31775,6 +33620,17 @@ Properties: - "Ctl,LeftPeriod,Slash" - VsFTPd can't handle file names starting with dot +#### --ftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FTP_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -32523,6 +34379,17 @@ Properties: - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot +#### --gcs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GCS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -33916,10 +35783,23 @@ Properties: - "true" - Get GCP IAM credentials from the environment (env vars or IAM). +#### --drive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DRIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored in the properties field of the drive object. +Metadata is supported on files and directories. + Here are the possible system metadata items for the drive backend. | Name | Help | Type | Example | Read Only | @@ -34746,6 +36626,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --gphotos-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GPHOTOS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -35061,6 +36952,17 @@ Properties: - Type: SizeSuffix - Default: 0 +#### --hasher-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HASHER_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -35410,6 +37312,17 @@ Properties: - Type: Encoding - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot +#### --hdfs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HDFS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -35831,6 +37744,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --hidrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HIDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -36066,6 +37990,17 @@ Properties: - Type: bool - Default: false +#### --http-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HTTP_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the http backend. @@ -36304,6 +38239,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket +#### --imagekit-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_IMAGEKIT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
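+
+To inspect the metadata rclone reads for a given object, a command along these lines can be used (the remote and path are placeholders):
+
+    rclone lsjson --stat -M remote:path/to/file
+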
@@ -36587,6 +38533,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot +#### --internetarchive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_INTERNETARCHIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Metadata fields provided by Internet Archive. @@ -37061,6 +39018,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot +#### --jottacloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_JOTTACLOUD_DESCRIPTION +- Type: string +- Required: false + ### Metadata Jottacloud has limited support for metadata, currently an extended set of timestamps. @@ -37303,6 +39271,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --koofr-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_KOOFR_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -37520,6 +39499,21 @@ Properties: - Type: string - Required: true +### Advanced options + +Here are the Advanced options specific to linkbox (Linkbox). + +#### --linkbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LINKBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -37935,6 +39929,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --mailru-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MAILRU_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -38224,6 +40229,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8,Dot +#### --mega-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEGA_DESCRIPTION +- Type: string +- Required: false + ### Process `killed` @@ -38299,6 +40315,21 @@ The memory backend replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters). +### Advanced options + +Here are the Advanced options specific to memory (In memory object storage system.). + +#### --memory-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEMORY_DESCRIPTION +- Type: string +- Required: false + # Akamai NetStorage @@ -38539,6 +40570,17 @@ Properties: - "https" - HTTPS protocol +#### --netstorage-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_NETSTORAGE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the netstorage backend. @@ -39404,6 +41446,35 @@ Properties: - Type: bool - Default: false +#### --azureblob-delete-snapshots + +Set to specify how to deal with snapshots on blob deletion. + +Properties: + +- Config: delete_snapshots +- Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS +- Type: string +- Required: false +- Choices: + - "" + - By default, the delete operation fails if a blob has snapshots + - "include" + - Specify 'include' to remove the root blob and all its snapshots + - "only" + - Specify 'only' to remove only the snapshots but keep the root blob. 
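+
+For example, to remove a single blob together with all of its snapshots, something like the following should work (the container and path are placeholders):
+
+    rclone deletefile --azureblob-delete-snapshots include remote:container/path/to/blob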
+ +#### --azureblob-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREBLOB_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -40130,6 +42201,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot +#### --azurefiles-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREFILES_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -40769,7 +42851,7 @@ Properties: If set rclone will use delta listing to implement recursive listings. -If this flag is set the the onedrive backend will advertise `ListR` +If this flag is set the onedrive backend will advertise `ListR` support for recursive listings. Setting this flag speeds up these things greatly: @@ -40802,6 +42884,30 @@ Properties: - Type: bool - Default: false +#### --onedrive-metadata-permissions + +Control whether permissions should be read or written in metadata. + +Reading permissions metadata from files can be done quickly, but it +isn't always desirable to set the permissions from the metadata. + + +Properties: + +- Config: metadata_permissions +- Env Var: RCLONE_ONEDRIVE_METADATA_PERMISSIONS +- Type: Bits +- Default: off +- Examples: + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "read,write" + - Read and Write the value. + #### --onedrive-encoding The encoding for the backend. @@ -40815,6 +42921,191 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --onedrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_ONEDRIVE_DESCRIPTION +- Type: string +- Required: false + +### Metadata + +OneDrive supports System Metadata (not User Metadata, as of this writing) for +both files and directories. Much of the metadata is read-only, and there are some +differences between OneDrive Personal and Business (see table below for +details). + +Permissions are also supported, if `--onedrive-metadata-permissions` is set. The +accepted values for `--onedrive-metadata-permissions` are `read`, `write`, +`read,write`, and `off` (the default). `write` supports adding new permissions, +updating the "role" of existing permissions, and removing permissions. Updating +and removing require the Permission ID to be known, so it is recommended to use +`read,write` instead of `write` if you wish to update/remove permissions. + +Permissions are read/written in JSON format using the same schema as the +[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online), +which differs slightly between OneDrive Personal and Business. 
+ +Example for OneDrive Personal: +```json +[ + { + "id": "1234567890ABC!123", + "grantedTo": { + "user": { + "id": "ryan@contoso.com" + }, + "application": {}, + "device": {} + }, + "invitation": { + "email": "ryan@contoso.com" + }, + "link": { + "webUrl": "https://1drv.ms/t/s!1234567890ABC" + }, + "roles": [ + "read" + ], + "shareId": "s!1234567890ABC" + } +] +``` + +Example for OneDrive Business: +```json +[ + { + "id": "48d31887-5fad-4d73-a9f5-3c356e68a038", + "grantedToIdentities": [ + { + "user": { + "displayName": "ryan@contoso.com" + }, + "application": {}, + "device": {} + } + ], + "link": { + "type": "view", + "scope": "users", + "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s" + }, + "roles": [ + "read" + ], + "shareId": "u!LKj1lkdlals90j1nlkascl" + }, + { + "id": "5D33DD65C6932946", + "grantedTo": { + "user": { + "displayName": "John Doe", + "id": "efee1b77-fb3b-4f65-99d6-274c11914d12" + }, + "application": {}, + "device": {} + }, + "roles": [ + "owner" + ], + "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U" + } +] +``` + +To write permissions, pass in a "permissions" metadata key using this same +format. The [`--metadata-mapper`](https://rclone.org/docs/#metadata-mapper) tool can +be very helpful for this. + +When adding permissions, an email address can be provided in the `User.ID` or +`DisplayName` properties of `grantedTo` or `grantedToIdentities`. Alternatively, +an ObjectID can be provided in `User.ID`. At least one valid recipient must be +provided in order to add a permission for a user. Creating a Public Link is also +supported, if `Link.Scope` is set to `"anonymous"`. + +Example request to add a "read" permission: + +```json +[ + { + "id": "", + "grantedTo": { + "user": {}, + "application": {}, + "device": {} + }, + "grantedToIdentities": [ + { + "user": { + "id": "ryan@contoso.com" + }, + "application": {}, + "device": {} + } + ], + "roles": [ + "read" + ] + } +] +``` + +Note that adding a permission can fail if a conflicting permission already +exists for the file/folder. + +To update an existing permission, include both the Permission ID and the new +`roles` to be assigned. `roles` is the only property that can be changed. + +To remove permissions, pass in a blob containing only the permissions you wish +to keep (which can be empty, to remove all.) + +Note that both reading and writing permissions requires extra API calls, so if +you don't need to read or write permissions it is recommended to omit +`--onedrive-metadata-permissions`. + +Metadata and permissions are supported for Folders (directories) as well as +Files. Note that setting the `mtime` or `btime` on a Folder requires one extra +API call on OneDrive Business only. + +OneDrive does not currently support User Metadata. When writing metadata, only +writeable system properties will be written -- any read-only or unrecognized keys +passed in will be ignored. + +TIP: to see the metadata and permissions for any file or folder, run: + +``` +rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read +``` + +Here are the possible system metadata items for the onedrive backend. + +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| btime | Time of file birth (creation) with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N | +| content-type | The MIME type of the file. 
| string | text/plain | **Y** | +| created-by-display-name | Display name of the user that created the item. | string | John Doe | **Y** | +| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** | +| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N | +| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | **Y** | +| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | **Y** | +| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** | +| malware-detected | Whether OneDrive has detected that the item contains malware. | boolean | true | **Y** | +| mtime | Time of last modification with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N | +| package-type | If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. | string | oneNote | **Y** | +| permissions | Permissions in a JSON dump of OneDrive format. Enable with --onedrive-metadata-permissions. Properties: id, grantedTo, grantedToIdentities, invitation, inheritedFrom, link, roles, shareId | JSON | {} | N | +| shared-by-id | ID of the user that shared the item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** | +| shared-owner-id | ID of the owner of the shared item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** | +| shared-scope | If shared, indicates the scope of how the item is shared: anonymous, organization, or users. | string | users | **Y** | +| shared-time | Time when the item was shared, with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | **Y** | +| utime | Time of upload with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | **Y** | + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Limitations @@ -41204,6 +43495,17 @@ Properties: - Type: SizeSuffix - Default: 10Mi +#### --opendrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_OPENDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -41278,13 +43580,17 @@ Press Enter for the default (env_auth). 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm \ (user_principal_auth) - / use instance principals to authorize an instance to make API calls. - 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. + / use instance principals to authorize an instance to make API calls. + 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm \ (instance_principal_auth) - 4 / use resource principals to make API calls + / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud + 4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM). 
+ | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm + \ (workload_identity_auth) + 5 / use resource principals to make API calls \ (resource_principal_auth) - 5 / no credentials needed, this is typically for reading public buckets + 6 / no credentials needed, this is typically for reading public buckets \ (no_auth) provider> 2 @@ -41370,6 +43676,7 @@ Rclone supports the following OCI authentication provider. User Principal Instance Principal Resource Principal + Workload Identity No authentication ### User Principal @@ -41443,6 +43750,14 @@ Sample rclone configuration file for Authentication Provider Resource Principal: region = us-ashburn-1 provider = resource_principal_auth +### Workload Identity +Workload Identity auth may be used when running Rclone from a Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. +For more details on configuring Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm). +To use workload identity, ensure Rclone is started with these environment variables set in its process. + + export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 + export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 + ### No authentication Public buckets do not require any authentication mechanism to read objects. @@ -41525,6 +43840,9 @@ Properties: - use instance principals to authorize an instance to make API calls. - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - "workload_identity_auth" + - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). + - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm - "resource_principal_auth" - use resource principals to make API calls - "no_auth" - no credentials needed, this is typically for reading public buckets @@ -41910,6 +44228,17 @@ Properties: - "AES256" - AES256 +#### --oos-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_OOS_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the oracleobjectstorage backend. @@ -41990,6 +44319,47 @@ Options: - "max-age": Max age of upload to delete +### restore + +Restore objects from Archive to Standard storage + + rclone backend restore remote: [options] [+] + +This command can be used to restore one or more objects from Archive to Standard storage. + + Usage Examples: + + rclone backend restore oos:bucket/path/to/directory -o hours=HOURS + rclone backend restore oos:bucket -o hours=HOURS + +This command also obeys the filters. Test first with the --interactive/-i or --dry-run flags + + rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72 + +All the objects shown will be marked for restore, then + + rclone backend restore --include "*.txt" oos:bucket/path -o hours=72 + + It returns a list of status dictionaries with Object Name and Status + keys. The Status will be "RESTORED" if it was successful or an error message + if not. + + [ + { + "Object": "test.txt", + "Status": "RESTORED" + }, + { + "Object": "test/file4.txt", + "Status": "RESTORED" + } + ] + + +Options: + +- "hours": The number of hours for which this object will be restored. Default is 24 hrs.
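+
+For example, to preview which archived objects under a prefix would be restored for 48 hours before actually doing it (the bucket and path here are illustrative):
+
+    rclone --dry-run backend restore oos:bucket/backups -o hours=48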
+ ## Tutorials @@ -42301,6 +44671,17 @@ Properties: - Type: Encoding - Default: Slash,Ctl,InvalidUtf8 +#### --qingstor-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_QINGSTOR_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -42535,7 +44916,7 @@ Properties: #### --quatrix-hard-delete -Delete files permanently rather than putting them into the trash. +Delete files permanently rather than putting them into the trash Properties: @@ -42544,6 +44925,28 @@ Properties: - Type: bool - Default: false +#### --quatrix-skip-project-folders + +Skip project folders in operations + +Properties: + +- Config: skip_project_folders +- Env Var: RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS +- Type: bool +- Default: false + +#### --quatrix-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_QUATRIX_DESCRIPTION +- Type: string +- Required: false + ## Storage usage @@ -42745,6 +45148,17 @@ Properties: - Type: Encoding - Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot +#### --sia-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SIA_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -43341,6 +45755,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8 +#### --swift-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SWIFT_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -43668,6 +46093,17 @@ Properties: - Type: string - Required: false +#### --pcloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PCLOUD_DESCRIPTION +- Type: string +- Required: false + # PikPak @@ -43906,6 +46342,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --pikpak-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PIKPAK_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the pikpak backend. 
@@ -44176,6 +46623,17 @@ Properties: - Type: Encoding - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --premiumizeme-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PREMIUMIZEME_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44515,6 +46973,17 @@ Properties: - Type: bool - Default: true +#### --protondrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44737,6 +47206,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --putio-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PUTIO_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -45074,6 +47554,17 @@ Properties: - Type: bool - Default: true +#### --protondrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -45486,6 +47977,17 @@ Properties: - Type: Encoding - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 +#### --seafile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SEAFILE_DESCRIPTION +- Type: string +- Required: false + # SFTP @@ -46526,6 +49028,17 @@ Properties: - Type: bool - Default: false +#### --sftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SFTP_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -46807,6 +49320,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --smb-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SMB_DESCRIPTION +- Type: string +- Required: false + # Storj @@ -47098,6 +49622,21 @@ Properties: - Type: string - Required: false +### Advanced options + +Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage). + +#### --storj-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_STORJ_DESCRIPTION +- Type: string +- Required: false + ## Usage @@ -47497,6 +50036,17 @@ Properties: - Type: Encoding - Default: Slash,Ctl,InvalidUtf8,Dot +#### --sugarsync-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SUGARSYNC_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -47655,6 +50205,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot +#### --uptobox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_UPTOBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -47947,6 +50508,17 @@ Properties: - Type: SizeSuffix - Default: 1Gi +#### --union-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_UNION_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
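+
+As a minimal sketch (the upstream remotes are placeholders), a union remote pooling two upstreams might be configured like this:
+
+    [pool]
+    type = union
+    # both upstreams are merged into a single view
+    upstreams = remote1:dir1 remote2:dir2
+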
@@ -48223,13 +50795,35 @@ Properties: - Type: SizeSuffix - Default: 10Mi +#### --webdav-owncloud-exclude-shares + +Exclude ownCloud shares + +Properties: + +- Config: owncloud_exclude_shares +- Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES +- Type: bool +- Default: false + +#### --webdav-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_WEBDAV_DESCRIPTION +- Type: string +- Required: false + ## Provider notes See below for notes on specific providers. -## Fastmail Files +### Fastmail Files Use `https://webdav.fastmail.com/` or a subdirectory as the URL, and your Fastmail email `username@domain.tld` as the username. @@ -48624,6 +51218,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --yandex-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_YANDEX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -48878,6 +51483,17 @@ Properties: - Type: Encoding - Default: Del,Ctl,InvalidUtf8 +#### --zoho-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_ZOHO_DESCRIPTION +- Type: string +- Required: false + ## Setting up your own client_id @@ -49457,6 +52073,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --local-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LOCAL_DESCRIPTION +- Type: string +- Required: false + ### Metadata Depending on which OS is in use the local backend may return only some @@ -49468,6 +52095,8 @@ netbsd, macOS and Solaris. It is **not** supported on Windows yet User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix. +Metadata is supported on files and directories. + Here are the possible system metadata items for the local backend. | Name | Help | Type | Example | Read Only | @@ -49516,6 +52145,238 @@ Options: # Changelog +## v1.66.0 - 2024-03-10 + +[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0) + +* Major features + * Rclone will now sync directory modification times if the backend supports it. + * This can be disabled with [--no-update-dir-modtime](https://rclone.org/docs/#no-update-dir-modtime) + * See [the overview](https://rclone.org/overview/#features) and look for the `D` flags in the `ModTime` column to see which backends support it. + * Rclone will now sync directory metadata if the backend supports it when `-M`/`--metadata` is in use. + * See [the overview](https://rclone.org/overview/#features) and look for the `D` flags in the `Metadata` column to see which backends support it. 
+ * Bisync has received many updates see below for more details or [bisync's changelog](https://rclone.org/bisync/#changelog) +* Removed backends + * amazonclouddrive: Remove Amazon Drive backend code and docs (Nick Craig-Wood) +* New Features + * backend + * Add description field for all backends (Paul Stern) + * build + * Update to go1.22 and make go1.20 the minimum required version (Nick Craig-Wood) + * Fix `CVE-2024-24786` by upgrading `google.golang.org/protobuf` (Nick Craig-Wood) + * check: Respect `--no-unicode-normalization` and `--ignore-case-sync` for `--checkfile` (nielash) + * cmd: Much improved shell auto completion which reduces the size of the completion file and works faster (Nick Craig-Wood) + * doc updates (albertony, ben-ba, Eli, emyarod, huajin tong, Jack Provance, kapitainsky, keongalvin, Nick Craig-Wood, nielash, rarspace01, rzitzer, Tera, Vincent Murphy) + * fs: Add more detailed logging for file includes/excludes (Kyle Reynolds) + * lsf + * Add `--time-format` flag (nielash) + * Make metadata appear for directories (Nick Craig-Wood) + * lsjson: Make metadata appear for directories (Nick Craig-Wood) + * rc + * Add `srcFs` and `dstFs` to `core/stats` and `core/transferred` stats (Nick Craig-Wood) + * Add `operations/hashsum` to the rc as `rclone hashsum` equivalent (Nick Craig-Wood) + * Add `config/paths` to the rc as `rclone config paths` equivalent (Nick Craig-Wood) + * sync + * Optionally report list of synced paths to file (nielash) + * Implement directory sync for mod times and metadata (Nick Craig-Wood) + * Don't set directory modtimes if already set (nielash) + * Don't sync directory modtimes from backends which don't have directories (Nick Craig-Wood) +* Bug Fixes + * backend + * Make backends which use oauth implement the `Shutdown` and shutdown the oauth properly (rkonfj) + * bisync + * Handle unicode and case normalization consistently (nielash) + * Partial uploads known issue on `local`/`ftp`/`sftp` has been resolved (unless using `--inplace`) (nielash) + * Fixed handling of unicode normalization and case insensitivity, support for [`--fix-case`](https://rclone.org/docs/#fix-case), [`--ignore-case-sync`](/docs/#ignore-case-sync), [`--no-unicode-normalization`](/docs/#no-unicode-normalization) (nielash) + * Bisync no longer fails to find the correct listing file when configs are overridden with backend-specific flags. 
(nielash) + * nfsmount + * Fix exit after external unmount (nielash) + * Fix `--volname` being ignored (nielash) + * operations + * Fix renaming a file on macOS (nielash) + * Fix case-insensitive moves in operations.Move (nielash) + * Fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests (nielash) + * Fix TestMkdirModTime test (Nick Craig-Wood) + * Fix TestSetDirModTime for backends with SetDirModTime but not Metadata (Nick Craig-Wood) + * Fix typo in log messages (nielash) + * serve nfs: Fix writing files via Finder on macOS (nielash) + * serve restic: Fix error handling (Michael Eischer) + * serve webdav: Fix `--baseurl` without leading / (Nick Craig-Wood) + * stats: Fix race between ResetCounters and stopAverageLoop called from time.AfterFunc (Nick Craig-Wood) + * sync + * `--fix-case` flag to rename case insensitive dest (nielash) + * Use operations.DirMove instead of sync.MoveDir for `--fix-case` (nielash) + * systemd: Fix detection and switch to the coreos package everywhere rather than having 2 separate libraries (Anagh Kumar Baranwal) +* Mount + * Fix macOS not noticing errors with `--daemon` (Nick Craig-Wood) + * Notice daemon dying much quicker (Nick Craig-Wood) +* VFS + * Fix unicode normalization on macOS (nielash) +* Bisync + * Copies and deletes are now handled in one operation instead of two (nielash) + * `--track-renames` and `--backup-dir` are now supported (nielash) + * Final listings are now generated from sync results, to avoid needing to re-list (nielash) + * Bisync is now much more resilient to changes that happen during a bisync run, and far less prone to critical errors / undetected changes (nielash) + * Bisync is now capable of rolling a file listing back in cases of uncertainty, essentially marking the file as needing to be rechecked next time. (nielash) + * A few basic terminal colors are now supported, controllable with [`--color`](https://rclone.org/docs/#color-when) (`AUTO`|`NEVER`|`ALWAYS`) (nielash) + * Initial listing snapshots of Path1 and Path2 are now generated concurrently, using the same "march" infrastructure as `check` and `sync`, for performance improvements and less risk of error. (nielash) + * `--resync` is now much more efficient (especially for users of `--create-empty-src-dirs`) (nielash) + * Google Docs (and other files of unknown size) are now supported (with the same options as in `sync`) (nielash) + * Equality checks before a sync conflict rename now fall back to `cryptcheck` (when possible) or `--download`, +instead of `--size-only`, when `check` is not available. (nielash) + * Bisync now fully supports comparing based on any combination of size, modtime, and checksum, lifting the prior restriction on backends without modtime support. (nielash) + * Bisync now supports a "Graceful Shutdown" mode to cleanly cancel a run early without requiring `--resync`. (nielash) + * New `--recover` flag allows robust recovery in the event of interruptions, without requiring `--resync`. (nielash) + * A new `--max-lock` setting allows lock files to automatically renew and expire, for better automatic recovery when a run is interrupted. (nielash) + * Bisync now supports auto-resolving sync conflicts and customizing rename behavior with new [`--conflict-resolve`](#conflict-resolve), [`--conflict-loser`](#conflict-loser), and [`--conflict-suffix`](#conflict-suffix) flags. (nielash) + * A new [`--resync-mode`](#resync-mode) flag allows more control over which version of a file gets kept during a `--resync`.
(nielash) + * Bisync now supports [`--retries`](https://rclone.org/docs/#retries-int) and [`--retries-sleep`](/docs/#retries-sleep-time) (when [`--resilient`](#resilient) is set.) (nielash) + * Clarify file operation directions in dry-run logs (Kyle Reynolds) +* Local + * Fix cleanRootPath on Windows after go1.21.4 stdlib update (nielash) + * Implement setting modification time on directories (nielash) + * Implement modtime and metadata for directories (Nick Craig-Wood) + * Fix setting of btime on directories on Windows (Nick Craig-Wood) + * Delete backend implementation of Purge to speed up and make stats (Nick Craig-Wood) + * Support metadata setting and mapping on server side Move (Nick Craig-Wood) +* Cache + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Crypt + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Improve handling of undecryptable file names (nielash) + * Add missing error check spotted by linter (Nick Craig-Wood) +* Azure Blob + * Implement `--azureblob-delete-snapshots` (Nick Craig-Wood) +* B2 + * Clarify exactly what `--b2-download-auth-duration` does in the docs (Nick Craig-Wood) +* Chunker + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Combine + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Fix directory metadata error on upstream root (nielash) + * Fix directory move across upstreams (nielash) +* Compress + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Drive + * Implement setting modification time on directories (nielash) + * Implement modtime and metadata setting for directories (Nick Craig-Wood) + * Support metadata setting and mapping on server side Move,Copy (Nick Craig-Wood) +* FTP + * Fix mkdir with rsftp which is returning the wrong code (Nick Craig-Wood) +* Hasher + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Fix error from trying to stop an already-stopped db (nielash) + * Look for cached hash if passed hash unexpectedly blank (nielash) +* Imagekit + * Updated docs and web content (Harshit Budhraja) + * Updated overview - supported operations (Harshit Budhraja) +* Mega + * Fix panic with go1.22 (Nick Craig-Wood) +* Netstorage + * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood) +* Onedrive + * Add metadata support (nielash) +* Opendrive + * Fix moving file/folder within the same parent dir (nielash) +* Oracle Object Storage + * Support `backend restore` command (Nikhil Ahuja) + * Support workload identity authentication for OKE (Anders Swanson) +* Protondrive + * Fix encoding of Root method (Nick Craig-Wood) +* Quatrix + * Fix `Content-Range` header (Volodymyr) + * Add option to skip project folders (Oksana Zhykina) + * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood) +* S3 + * Add `--s3-version-deleted` to show delete markers in listings when using versions. 
(Nick Craig-Wood) + * Add IPv6 support with option `--s3-use-dual-stack` (Anthony Metzidis) + * Copy parts in parallel when doing chunked server side copy (Nick Craig-Wood) + * GCS provider: fix server side copy of files bigger than 5G (Nick Craig-Wood) + * Support metadata setting and mapping on server side Copy (Nick Craig-Wood) +* Seafile + * Fix download/upload error when `FILE_SERVER_ROOT` is relative (DanielEgbers) + * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood) +* SFTP + * Implement setting modification time on directories (nielash) + * Set directory modtimes update on write flag (Nick Craig-Wood) + * Shorten wait delay for external ssh binaries now that we are using go1.20 (Nick Craig-Wood) +* Swift + * Avoid unnecessary container versioning check (Joe Cai) +* Union + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* WebDAV + * Reduce priority of chunks upload log (Gabriel Ramos) + * owncloud: Add config `owncloud_exclude_shares` which allows to exclude shared files and folders when listing remote resources (Thomas Müller) + +## v1.65.2 - 2024-01-24 + +[See commits](https://github.com/rclone/rclone/compare/v1.65.1...v1.65.2) + +* Bug Fixes + * build: bump github.com/cloudflare/circl from 1.3.6 to 1.3.7 (dependabot) + * docs updates (Nick Craig-Wood, kapitainsky, nielash, Tera, Harshit Budhraja) +* VFS + * Fix stale data when using `--vfs-cache-mode` full (Nick Craig-Wood) +* Azure Blob + * **IMPORTANT** Fix data corruption bug - see [#7590](https://github.com/rclone/rclone/issues/7590) (Nick Craig-Wood) + +## v1.65.1 - 2024-01-08 + +[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.65.1) + +* Bug Fixes + * build + * Bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795 (dependabot) + * Update to go1.21.5 to fix Windows path problems (Nick Craig-Wood) + * Fix docker build on arm/v6 (Nick Craig-Wood) + * install.sh: fix harmless error message on install (Nick Craig-Wood) + * accounting: fix stats to show server side transfers (Nick Craig-Wood) + * doc fixes (albertony, ben-ba, Eli Orzitzer, emyarod, keongalvin, rarspace01) + * nfsmount: Compile for all unix oses, add `--sudo` and fix error/option handling (Nick Craig-Wood) + * operations: Fix files moved by rclone move not being counted as transfers (Nick Craig-Wood) + * oauthutil: Avoid panic when `*token` and `*ts.token` are the same (rkonfj) + * serve s3: Fix listing oddities (Nick Craig-Wood) +* VFS + * Note that `--vfs-refresh` runs in the background (Nick Craig-Wood) +* Azurefiles + * Fix storage base url (Oksana) +* Crypt + * Fix rclone move a file over itself deleting the file (Nick Craig-Wood) +* Chunker + * Fix rclone move a file over itself deleting the file (Nick Craig-Wood) +* Compress + * Fix rclone move a file over itself deleting the file (Nick Craig-Wood) +* Dropbox + * Fix used space on dropbox team accounts (Nick Craig-Wood) +* FTP + * Fix multi-thread copy (WeidiDeng) +* Googlephotos + * Fix nil pointer exception when batch failed (Nick Craig-Wood) +* Hasher + * Fix rclone move a file over itself deleting the file (Nick Craig-Wood) + * Fix invalid memory address error when MaxAge == 0 (nielash) +* Onedrive + * Fix error listing: unknown object type `` (Nick Craig-Wood) + * Fix "unauthenticated: Unauthenticated" errors when uploading (Nick Craig-Wood) +* Oracleobjectstorage + * Fix object storage endpoint for custom endpoints (Manoj Ghosh) + * 
Multipart copy create bucket if it doesn't exist. (Manoj Ghosh) +* Protondrive + * Fix CVE-2023-45286 / GHSA-xwh9-gc39-5298 (Nick Craig-Wood) +* S3 + * Fix crash if no UploadId in multipart upload (Nick Craig-Wood) +* Smb + * Fix shares not listed by updating go-smb2 (halms) +* Union + * Fix rclone move a file over itself deleting the file (Nick Craig-Wood) + ## v1.65.0 - 2023-11-26 [See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0) @@ -54432,10 +57293,13 @@ Point release to fix hubic and azureblob backends. ## Limitations -### Directory timestamps aren't preserved +### Directory timestamps aren't preserved on some backends -Rclone doesn't currently preserve the timestamps of directories. This -is because rclone only really considers objects when syncing. +As of `v1.66`, rclone supports syncing directory modtimes, if the backend +supports it. Some backends do not support it -- see +[overview](https://rclone.org/overview/) for a complete list. Additionally, note +that empty directories are not synced by default (this can be enabled with +`--create-empty-src-dirs`.) ### Rclone struggles with millions of files in a directory/bucket @@ -54799,7 +57663,7 @@ put them back in again.` >}} * Scott McGillivray * Bjørn Erik Pedersen * Lukas Loesche - * emyarod + * emyarod * T.C. Ferguson * Brandur * Dario Giovannetti @@ -55547,6 +58411,27 @@ put them back in again.` >}} * Alen Šiljak * 你知道未来吗 * Abhinav Dhiman <8640877+ahnv@users.noreply.github.com> + * halms <7513146+halms@users.noreply.github.com> + * ben-ba + * Eli Orzitzer + * Anthony Metzidis + * emyarod + * keongalvin + * rarspace01 + * Paul Stern + * Nikhil Ahuja + * Harshit Budhraja <52413945+harshit-budhraja@users.noreply.github.com> + * Tera <24725862+teraa@users.noreply.github.com> + * Kyle Reynolds + * Michael Eischer + * Thomas Müller <1005065+DeepDiver1975@users.noreply.github.com> + * DanielEgbers <27849724+DanielEgbers@users.noreply.github.com> + * Jack Provance <49460795+njprov@users.noreply.github.com> + * Gabriel Ramos <109390599+gabrielramos02@users.noreply.github.com> + * Dan McArdle + * Joe Cai + * Anders Swanson + * huajin tong <137764712+thirdkeyword@users.noreply.github.com> # Contact the rclone project diff --git a/MANUAL.txt b/MANUAL.txt index 11a38590b..65aafc533 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Nov 26, 2023 +Mar 10, 2024 Rclone syncs your files to cloud storage @@ -79,6 +79,7 @@ Features - Can use multi-threaded downloads to local disk - Copy new or changed files to cloud storage - Sync (one way) to make a directory identical +- Bisync (two way) to keep two directories in sync bidirectionally - Move files to cloud storage deleting the local after verification - Check hashes and for missing/extra files - Mount your cloud storage as a network disk @@ -93,7 +94,6 @@ S3, that work out of the box.) - 1Fichier - Akamai Netstorage - Alibaba Cloud (Aliyun) Object Storage System (OSS) -- Amazon Drive - Amazon S3 - Backblaze B2 - Box @@ -116,6 +116,7 @@ S3, that work out of the box.) - Hetzner Storage Box - HiDrive - HTTP +- ImageKit - Internet Archive - Jottacloud - IBM COS S3 @@ -820,7 +821,6 @@ See the following for detailed instructions for - 1Fichier - Akamai Netstorage - Alias -- Amazon Drive - Amazon S3 - Backblaze B2 - Box @@ -998,6 +998,14 @@ recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: +Rclone will sync the modification times of files and directories if the +backend supports it. 
If metadata syncing is required then use the +--metadata flag. + +Note that the modification time and metadata for the root directory will +not be synced. See https://github.com/rclone/rclone/issues/7652 for more +info. + Note: Use the -P/--progress flag to view real-time transfer statistics. Note: Use the --dry-run or the --interactive/-i flag to test without @@ -1023,7 +1031,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1037,6 +1045,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -1129,18 +1138,85 @@ the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory. +Rclone will sync the modification times of files and directories if the +backend supports it. If metadata syncing is required then use the +--metadata flag. + +Note that the modification time and metadata for the root directory will +not be synced. See https://github.com/rclone/rclone/issues/7652 for more +info. + Note: Use the -P/--progress flag to view real-time transfer statistics Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post for more info. +Logger Flags + +The --differ, --missing-on-dst, --missing-on-src, --match and --error +flags write paths, one per line, to the file name (or stdout if it is -) +supplied. What they write is described in the help below. For example +--differ will write all paths which are present on both the source and +destination but different. + +The --combined flag will write a file (or stdout) which contains all +file paths with a symbol and then a space and then the path to tell you +what happened to it. These are reminiscent of diff files. + +- = path means path was found in source and destination and was + identical +- `- path` means path was missing on the source, so only in the + destination +- `+ path` means path was missing on the destination, so only in the + source +- `* path` means path was present in source and destination but + different. +- ! path means there was an error reading or hashing the source or + dest. + +The --dest-after flag writes a list file using the same format flags as +lsf (including customizable options for hash, modtime, etc.) 
+Conceptually it is similar to rsync's --itemize-changes, but not +identical -- it should output an accurate list of what will be on the +destination after the sync. + +Note that these logger flags have a few limitations, and certain +scenarios are not currently supported: + +- --max-duration / CutoffModeHard +- --compare-dest / --copy-dest +- server-side moves of an entire dir at once +- High-level retries, because there would be duplicates (use + --retries 1 to disable) +- Possibly some unusual error scenarios + +Note also that each file is logged during the sync, as opposed to after, +so it is most useful as a predictor of what SHOULD happen to each file +(which may or may not match what actually DID.) + rclone sync source:path dest:path [flags] Options + --absolute Put a leading / in front of path names + --combined string Make a combined report of changes to this file --create-empty-src-dirs Create empty source dirs on destination after sync + --csv Output in CSV format + --dest-after string Report all files that exist on the dest post-sync + --differ string Report all non-matching files to this file + -d, --dir-slash Append a slash to directory names (default true) + --dirs-only Only list directories + --error string Report all files with errors (hashing or reading) to this file + --files-only Only list files (default true) + -F, --format string Output format - see lsf help for details (default "p") + --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") -h, --help help for sync + --match string Report all matching files to this file + --missing-on-dst string Report all files missing from the destination to this file + --missing-on-src string Report all files missing from the source to this file + -s, --separator string Separator for the items in the format (default ";") + -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) Copy Options @@ -1155,7 +1231,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1169,6 +1245,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -1186,6 +1263,7 @@ Flags just used for rclone sync. 
--delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -1269,6 +1347,14 @@ See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. +Rclone will sync the modification times of files and directories if the +backend supports it. If metadata syncing is required then use the +--metadata flag. + +Note that the modification time and metadata for the root directory will +not be synced. See https://github.com/rclone/rclone/issues/7652 for more +info. + Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. @@ -1295,7 +1381,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -1309,6 +1395,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -2537,26 +2624,42 @@ each successive run it will: - list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files. - Propagate changes on Path1 to Path2, and vice-versa. +Bisync is in beta and is considered an advanced command, so use with +care. Make sure you have read and understood the entire manual +(especially the Limitations section) before using, or data loss can +result. Questions can be asked in the Rclone Forum. + See full bisync description for details. rclone bisync remote1:path1 remote2:path2 [flags] Options - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --create-empty-src-dirs Sync creation and deletion of empty directories. 
(Not compatible with --remove-empty-dirs) - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove ALL empty directories at the final cleanup step. - --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --compare string Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime') + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none") + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict') + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) 
(default "none") + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) Copy Options @@ -2571,7 +2674,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -2585,6 +2688,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -3550,7 +3654,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -3564,6 +3668,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -3623,7 +3728,7 @@ SEE ALSO rclone copyurl -Copy url content to dest. +Copy the contents of the URL supplied content to dest:path. Synopsis @@ -3632,10 +3737,11 @@ it in temporary storage. Setting --auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and used in the -destination path. With --auto-filename-header in addition, if a specific -filename is set in HTTP headers, it will be used instead of the name -from the URL. With --print-filename in addition, the resulting file name -will be printed. +destination path. 
+
+With --auto-filename-header in addition, if a specific filename is set
+in HTTP headers, it will be used instead of the name from the URL. With
+--print-filename in addition, the resulting file name will be printed.

Setting --no-clobber will prevent overwriting a file on the destination
if there is one with the same name.

@@ -3643,6 +3749,19 @@ there is one with the same name.
Setting --stdout or making the output file name - will cause the output
to be written to standard output.

+Troubleshooting
+
+If you can't get rclone copyurl to work then here are some things you
+can try:
+
+- --disable-http2 rclone will use HTTP2 if available - try disabling
+  it
+- --bind 0.0.0.0 rclone will use IPv6 if available - try disabling it
+- --bind ::0 to disable IPv4
+- --user-agent curl - some sites have whitelists for curl's
+  user-agent - try that
+- Make sure the site works with curl directly
+
    rclone copyurl https://example.com dest:path [flags]

Options

@@ -4139,14 +4258,15 @@ Synopsis

rclone listremotes lists all the available remotes from the config
file.

-When used with the --long flag it lists the types too.
+When used with the --long flag it lists the types and the descriptions
+too.

    rclone listremotes [flags]

Options

  -h, --help   help for listremotes
-      --long   Show the type as well as names
+      --long   Show the type and the description as well as names

See the global flags page for global options not listed here.

@@ -4253,6 +4373,20 @@ those only (without traversing the whole directory structure):

    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
    rclone copy --files-from-raw new_files /path/to/local remote:path

+The default time format is '2006-01-02 15:04:05'. Other formats can be
+specified with the --time-format flag. Examples:
+
+    rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
+    rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
+    rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
+    rclone lsf remote:path --format pt --time-format RFC3339
+    rclone lsf remote:path --format pt --time-format DateOnly
+    rclone lsf remote:path --format pt --time-format max
+
+--time-format max will automatically truncate
+'2006-01-02 15:04:05.000000000' to the maximum precision supported by
+the remote.
+
Any of the filtering options can be applied to this command.

There are several related list commands

@@ -4280,16 +4414,17 @@ bucket-based remotes).
Options

-      --absolute               Put a leading / in front of path names
-      --csv                    Output in CSV format
-  -d, --dir-slash              Append a slash to directory names (default true)
-      --dirs-only              Only list directories
-      --files-only             Only list files
-  -F, --format string          Output format - see help for details (default "p")
-      --hash h                 Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-  -h, --help                   help for lsf
-  -R, --recursive              Recurse into the listing
-  -s, --separator string       Separator for the items in the format (default ";")
+      --absolute                 Put a leading / in front of path names
+      --csv                      Output in CSV format
+  -d, --dir-slash                Append a slash to directory names (default true)
+      --dirs-only                Only list directories
+      --files-only               Only list files
+  -F, --format string            Output format - see help for details (default "p")
+      --hash h                   Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
+  -h, --help                     help for lsf
+  -R, --recursive                Recurse into the listing
+  -s, --separator string         Separator for the items in the format (default ";")
+  -t, --time-format string       Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)

Filter Options

@@ -4759,6 +4894,12 @@
Mounting on macOS can be done either via built-in NFS server, macFUSE
utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE
system which "mounts" via an NFSv4 local server.

+Unicode Normalization
+
+It is highly recommended to keep the default of
+--no-unicode-normalization=false for all mount and serve commands on
+macOS. For details, see vfs-case-sensitivity.
+
NFS mount

This method spins up an NFS server using the serve nfs command and
mounts it to the specified mountpoint. If you run this in background
mode using --daemon, you will need to send a SIGTERM signal to the
rclone process using the kill command to stop the mount.

+Note that --nfs-cache-handle-limit controls the maximum number of cached
+file handles stored by the nfsmount caching handler. This should not be
+set too low or you may experience errors when trying to access files.
+The default is 1000000, but consider lowering this limit if the server's
+system resource usage causes problems.
+
macFUSE Notes

If installing macFUSE using dmg packages from the website, rclone will
locate the macFUSE libraries without any further intervention. If
however, macFUSE is installed using the macports package manager, the
following additional steps are required.

    sudo mkdir /usr/local/lib
    cd /usr/local/lib
    sudo ln -s /opt/local/lib/libfuse.2.dylib

@@ -4794,14 +4941,6 @@
This means that viewing files with various tools, notably macOS Finder,
will cause rclone to update the modification time of the file. This may
make rclone upload a full new copy of the file.

-Unicode Normalization
-
-Rclone includes flags for unicode normalization with macFUSE that should
-be updated for FUSE-T. See this forum post and FUSE-T issue #16. The
-following flag should be added to the rclone mount command.
-
-    -o modules=iconv,from_code=UTF-8,to_code=UTF-8
-
Read Only mounts

When mounting with --read-only, attempts to write to files will fail

@@ -5264,6 +5403,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".

+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
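+
+As a minimal illustration (a sketch only - remote: and the mountpoint
+are placeholders), spelling the recommended default out explicitly on
+macOS looks like this:
+
+    rclone mount remote: /path/to/mountpoint --no-unicode-normalization=false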
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
system. It can be useful when those statistics cannot be read correctly
automatically.

    --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)

@@ -5320,6 +5481,7 @@ Options
      --read-only                             Only allow read-only access
      --uid uint32                            Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                             Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                  If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration            Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix         Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix   Target minimum free space on the disk containing the cache (default off)
@@ -5332,7 +5494,7 @@ Options
      --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix  If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                           Refreshes the directory cache recursively on start
+      --vfs-refresh                           Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size          Use the rclone size algorithm for Used size
      --vfs-write-back Duration               Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration               Time to wait for in-sequence write before giving error (default 1s)
@@ -5428,7 +5590,7 @@ Flags for anything which can Copy a file.
      --ignore-checksum                            Skip post copy check of checksums
      --ignore-existing                            Skip all files that exist on destination
      --ignore-size                                Ignore size when skipping use modtime or checksum
-  -I, --ignore-times                               Don't skip files that match size and time - transfer all files
+  -I, --ignore-times                               Don't skip items that match size and time - transfer all unconditionally
      --immutable                                  Do not modify files, fail if existing files have been modified
      --inplace                                    Download directly to destination file instead of atomic download to temp/rename
      --max-backlog int                            Maximum number of objects in sync or check backlog (default 10000)
--multi-thread-write-buffer-size SizeSuffix  In memory buffer size for writing when in multi-thread mode (default 128Ki)
      --no-check-dest                              Don't check the destination, copy regardless
      --no-traverse                                Don't traverse destination file system on copy
+      --no-update-dir-modtime                      Don't update directory modification times
      --no-update-modtime                          Don't update destination modtime if files identical
      --order-by string                            Instructions on how to order the transfers, e.g. 'size,descending'
      --partial-suffix string                      Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
@@ -5608,6 +5771,918 @@ SEE ALSO

- rclone - Show help for rclone commands, flags and backends.

+rclone nfsmount
+
+Mount the remote as a file system on a mountpoint.
+
+Synopsis
+
+rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of
+Rclone's cloud storage systems as a file system with FUSE.
+
+First set up your remote using rclone config. Check it works with
+rclone ls etc.
+
+On Linux and macOS, you can run mount in either foreground or background
+(aka daemon) mode. Mount runs in foreground mode by default. Use the
+--daemon flag to force background mode. On Windows you can run mount in
+foreground only; the flag is ignored.
+
+In background mode rclone acts as a generic Unix mount program: the main
+program starts, spawns a background rclone process to set up and
+maintain the mount, waits until success or timeout, and exits with an
+appropriate code (killing the child process if it fails).
+
+On Linux/macOS/FreeBSD start the mount like this, where
+/path/to/local/mount is an empty existing directory:
+
+    rclone nfsmount remote:path/to/files /path/to/local/mount
+
+On Windows you can start a mount in different ways. See below for
+details. If foreground mount is used interactively from a console
+window, rclone will serve the mount and occupy the console, so another
+window should be used to work with the mount until rclone is interrupted
+e.g. by pressing Ctrl-C.
+
+The following examples will mount to an automatically assigned drive, to
+specific drive letter X:, to path C:\path\parent\mount (where parent
+directory or drive must exist, and mount must not exist, and is not
+supported when mounting as a network drive), and the last example will
+mount as network share \\cloud\remote and map it to an automatically
+assigned drive:
+
+    rclone nfsmount remote:path/to/files *
+    rclone nfsmount remote:path/to/files X:
+    rclone nfsmount remote:path/to/files C:\path\parent\mount
+    rclone nfsmount remote:path/to/files \\cloud\remote
+
+When the program ends while in foreground mode, either via Ctrl+C or
+receiving a SIGINT or SIGTERM signal, the mount should be automatically
+stopped.
+
+When running in background mode the user will have to stop the mount
+manually:
+
+    # Linux
+    fusermount -u /path/to/local/mount
+    # OS X
+    umount /path/to/local/mount
+
+The umount operation can fail, for example when the mountpoint is busy.
+When that happens, it is the user's responsibility to stop the mount
+manually.
+
+The size of the mounted file system will be set according to information
+retrieved from the remote, the same as returned by the rclone about
+command. Remotes with unlimited storage may report the used size only,
+then an additional 1 PiB of free space is assumed. If the remote does
+not support the about feature at all, then 1 PiB is set as both the
+total and the free size.
+
+Installing on Windows
+
+To run rclone nfsmount on Windows, you will need to download and install
+WinFsp.
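+
+If you prefer a package manager, an install along these lines may work
+(the package identifier is an assumption - verify it against your
+package manager before use):
+
+    winget install -e --id WinFsp.WinFsp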
+
+WinFsp is an open-source Windows File System Proxy which makes it easy
+to write user space file systems for Windows. It provides a FUSE
+emulation layer which rclone uses in combination with cgofuse. Both of
+these packages are by Bill Zissimopoulos who was very helpful during the
+implementation of rclone nfsmount for Windows.
+
+Mounting modes on Windows
+
+Unlike other operating systems, Microsoft Windows provides a different
+filesystem type for network and fixed drives. It optimises access on the
+assumption that fixed disk drives are fast and reliable, while network
+drives have relatively high latency and less reliability. Some settings
+can also be differentiated between the two types, for example that
+Windows Explorer should just display icons and not create preview
+thumbnails for image and video files on network drives.
+
+In most cases, rclone will mount the remote as a normal, fixed disk
+drive by default. However, you can also choose to mount it as a remote
+network drive, often described as a network share. If you mount an
+rclone remote using the default, fixed drive mode and experience
+unexpected program errors, freezes or other issues, consider mounting as
+a network drive instead.
+
+When mounting as a fixed disk drive you can either mount to an unused
+drive letter, or to a path representing a nonexistent subdirectory of an
+existing parent directory or drive. Using the special value * will tell
+rclone to automatically assign the next available drive letter, starting
+with Z: and moving backward. Examples:
+
+    rclone nfsmount remote:path/to/files *
+    rclone nfsmount remote:path/to/files X:
+    rclone nfsmount remote:path/to/files C:\path\parent\mount
+    rclone nfsmount remote:path/to/files X:
+
+Option --volname can be used to set a custom volume name for the mounted
+file system. The default is to use the remote name and path.
+
+To mount as a network drive, you can add the option --network-mode to
+your nfsmount command. Mounting to a directory path is not supported in
+this mode, as it is a limitation Windows imposes on junctions, so the
+remote must always be mounted to a drive letter.
+
+    rclone nfsmount remote:path/to/files X: --network-mode
+
+A volume name specified with --volname will be used to create the
+network share path. A complete UNC path, such as \\cloud\remote,
+optionally with path \\cloud\remote\madeup\path, will be used as is. Any
+other string will be used as the share part, after a default prefix
+\\server\. If no volume name is specified then \\server\share will be
+used. You must make sure the volume name is unique when you are mounting
+more than one drive, or else the mount command will fail. The share name
+will be treated as the volume label for the mapped drive, shown in
+Windows Explorer etc, while the complete \\server\share will be reported
+as the remote UNC path by net use etc, just like a normal network drive
+mapping.
+
+If you specify a full network share UNC path with --volname, this will
+implicitly set the --network-mode option, so the following two examples
+have the same result:
+
+    rclone nfsmount remote:path/to/files X: --network-mode
+    rclone nfsmount remote:path/to/files X: --volname \\server\share
+
+You may also specify the network share UNC path as the mountpoint
+itself. Then rclone will automatically assign a drive letter, the same
+as with *, and use that as the mountpoint, and instead use the UNC path
+specified as the volume name, as if it were specified with the --volname
+option. This will also implicitly set the --network-mode option.
This
+means the following two examples have the same result:
+
+    rclone nfsmount remote:path/to/files \\cloud\remote
+    rclone nfsmount remote:path/to/files * --volname \\cloud\remote
+
+There is yet another way to enable network mode, and to set the share
+path, and that is to pass the "native" libfuse/WinFsp option directly:
+--fuse-flag --VolumePrefix=\server\share. Note that the path must be
+given with just a single backslash prefix in this case.
+
+Note: In previous versions of rclone this was the only supported method.
+
+Read more about drive mapping
+
+See also Limitations section below.
+
+Windows filesystem permissions
+
+The FUSE emulation layer on Windows must convert between the POSIX-based
+permission model used in FUSE, and the permission model used in Windows,
+based on access-control lists (ACL).
+
+The mounted filesystem will normally get three entries in its
+access-control list (ACL), representing permissions for the POSIX
+permission scopes: Owner, group and others. By default, the owner and
+group will be taken from the current user, and the built-in group
+"Everyone" will be used to represent others. The user/group can be
+customized with FUSE options "UserName" and "GroupName", e.g.
+-o UserName=user123 -o GroupName="Authenticated Users". The permissions
+on each entry will be set according to options --dir-perms and
+--file-perms, which take a value in traditional Unix numeric notation.
+
+The default permissions correspond to
+--file-perms 0666 --dir-perms 0777, i.e. read and write permissions for
+everyone. This means you will not be able to start any programs from the
+mount. To be able to do that you must add execute permissions, e.g.
+--file-perms 0777 --dir-perms 0777 to add it to everyone. If the program
+needs to write files, chances are you will have to enable VFS File
+Caching as well (see also limitations). Note that the default write
+permission has some restrictions for accounts other than the owner,
+specifically it lacks the "write extended attributes" permission, as
+explained next.
+
+The mapping of permissions is not always trivial, and the result you see
+in Windows Explorer may not be exactly like you expected. For example,
+when setting a value that includes write access for the group or others
+scope, this will be mapped to individual permissions "write attributes",
+"write data" and "append data", but not "write extended attributes".
+Windows will then show this as basic permission "Special" instead of
+"Write", because "Write" also covers the "write extended attributes"
+permission. When setting digit 0 for group or others, to indicate no
+permissions, they will still get individual permissions "read
+attributes", "read extended attributes" and "read permissions". This is
+done for compatibility reasons, e.g. to allow users without additional
+permissions to be able to read basic metadata about files like in Unix.
+
+WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity",
+that allows the complete specification of file security descriptors
+using SDDL. With this you get detailed control of the resulting
+permissions, compared to use of the POSIX permissions described above,
+and no additional permissions will be added automatically for
+compatibility with Unix. Some example use cases follow.
+
+If you set POSIX permissions for only allowing access to the owner,
+using --file-perms 0600 --dir-perms 0700, the user group and the
+built-in "Everyone" group will still be given some special permissions,
+as described above.
Some programs may then (incorrectly) interpret this +as the file being accessible by everyone, for example an SSH client may +warn about "unprotected private key file". You can work around this by +specifying -o FileSecurity="D:P(A;;FA;;;OW)", which sets file all access +(FA) to the owner (OW), and nothing else. + +When setting write permissions then, except for the owner, this does not +include the "write extended attributes" permission, as mentioned above. +This may prevent applications from writing to files, giving permission +denied error instead. To set working write permissions for the built-in +"Everyone" group, similar to what it gets by default but with the +addition of the "write extended attributes", you can specify +-o FileSecurity="D:P(A;;FRFW;;;WD)", which sets file read (FR) and file +write (FW) to everyone (WD). If file execute (FX) is also needed, then +change to -o FileSecurity="D:P(A;;FRFWFX;;;WD)", or set file all access +(FA) to get full access permissions, including delete, with +-o FileSecurity="D:P(A;;FA;;;WD)". + +Windows caveats + +Drives created as Administrator are not visible to other accounts, not +even an account that was elevated to Administrator with the User Account +Control (UAC) feature. A result of this is that if you mount to a drive +letter from a Command Prompt run as Administrator, and then try to +access the same drive from Windows Explorer (which does not run as +Administrator), you will not be able to see the mounted drive. + +If you don't need to access the drive from applications running with +administrative privileges, the easiest way around this is to always +create the mount from a non-elevated command prompt. + +To make mapped drives available to the user account that created them +regardless if elevated or not, there is a special Windows setting called +linked connections that can be enabled. + +It is also possible to make a drive mount available to everyone on the +system, by running the process creating it as the built-in SYSTEM +account. There are several ways to do this: One is to use the +command-line utility PsExec, from Microsoft's Sysinternals suite, which +has option -s to start processes as the SYSTEM account. Another +alternative is to run the mount command from a Windows Scheduled Task, +or a Windows Service, configured to run as the SYSTEM account. A third +alternative is to use the WinFsp.Launcher infrastructure). Read more in +the install documentation. Note that when running rclone as another +user, it will not use the configuration file from your profile unless +you tell it to with the --config option. Note also that it is now the +SYSTEM account that will have the owner permissions, and other accounts +will have permissions according to the group or others scopes. As +mentioned above, these will then not get the "write extended attributes" +permission, and this may prevent writing to files. You can work around +this with the FileSecurity option, see example above. + +Note that mapping to a directory path, instead of a drive letter, does +not suffer from the same limitations. + +Mounting on macOS + +Mounting on macOS can be done either via built-in NFS server, macFUSE +(also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver +utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE +system which "mounts" via an NFSv4 local server. + +Unicode Normalization + +It is highly recommended to keep the default of +--no-unicode-normalization=false for all mount and serve commands on +macOS. 
For details, see vfs-case-sensitivity.
+
+NFS mount
+
+This method spins up an NFS server using the serve nfs command and
+mounts it to the specified mountpoint. If you run this in background
+mode using --daemon, you will need to send a SIGTERM signal to the
+rclone process using the kill command to stop the mount.
+
+Note that --nfs-cache-handle-limit controls the maximum number of cached
+file handles stored by the nfsmount caching handler. This should not be
+set too low or you may experience errors when trying to access files.
+The default is 1000000, but consider lowering this limit if the server's
+system resource usage causes problems.
+
+macFUSE Notes
+
+If installing macFUSE using dmg packages from the website, rclone will
+locate the macFUSE libraries without any further intervention. If
+however, macFUSE is installed using the macports package manager, the
+following additional steps are required.
+
+    sudo mkdir /usr/local/lib
+    cd /usr/local/lib
+    sudo ln -s /opt/local/lib/libfuse.2.dylib
+
+FUSE-T Limitations, Caveats, and Notes
+
+There are some limitations, caveats, and notes about how it works. These
+are current as of FUSE-T version 1.0.14.
+
+ModTime update on read
+
+As per the FUSE-T wiki:
+
+    File access and modification times cannot be set separately as it
+    seems to be an issue with the NFS client which always modifies both.
+    Can be reproduced with 'touch -m' and 'touch -a' commands
+
+This means that viewing files with various tools, notably macOS Finder,
+will cause rclone to update the modification time of the file. This may
+make rclone upload a full new copy of the file.
+
+Read Only mounts
+
+When mounting with --read-only, attempts to write to files will fail
+silently as opposed to with a clear warning as in macFUSE.
+
+Limitations
+
+Without the use of --vfs-cache-mode this can only write files
+sequentially, and it can only seek when reading. This means that many
+applications won't work with their files on an rclone mount without
+--vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File
+Caching section for more info. When using NFS mount on macOS, if you
+don't specify --vfs-cache-mode the mount point will be read-only.
+
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2) do
+not support the concept of empty directories, so empty directories will
+have a tendency to disappear once they fall out of the directory cache.
+
+When rclone mount is invoked on Unix with the --daemon flag, the main
+rclone program will wait for the background mount to become ready or
+until the timeout specified by the --daemon-wait flag. On Linux it can
+check mount status using ProcFS so the flag in fact sets maximum time to
+wait, while the real wait can be less. On macOS / BSD the time to wait
+is constant and the check is performed only at the end. We advise you to
+set the wait time on macOS reasonably.
+
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
+
+rclone nfsmount vs rclone sync/copy
+
+File systems expect things to be 100% reliable, whereas cloud storage
+systems are a long way from 100% reliable. The rclone sync/copy commands
+cope with this with lots of retries. However rclone nfsmount can't use
+retries in the same way without making local copies of the uploads. Look
+at the VFS File Caching section for solutions to make nfsmount more
+reliable.
+
+Attribute caching
+
+You can use the flag --attr-timeout to set the time the kernel caches
+the attributes (size, modification time, etc.) for directory entries.
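+
+For example, on a remote that rarely changes outside rclone, one might
+accept a longer attribute cache (a sketch - tune the value against the
+tradeoffs discussed below):
+
+    rclone nfsmount remote:path/to/files /path/to/mountpoint --attr-timeout 10s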
+ +The default is 1s which caches files just long enough to avoid too many +callbacks to rclone from the kernel. + +In theory 0s should be the correct value for filesystems which can +change outside the control of the kernel. However this causes quite a +few problems such as rclone using too much memory, rclone not serving +files to samba and excessive time listing directories. + +The kernel can cache the info about a file for the time given by +--attr-timeout. You may see corruption if the remote file changes length +during this window. It will show up as either a truncated file or a file +with garbage on the end. With --attr-timeout 1s this is very unlikely +but not impossible. The higher you set --attr-timeout the more likely it +is. The default setting of "1s" is the lowest setting which mitigates +the problems above. + +If you set it higher (10s or 1m say) then the kernel will call back to +rclone less often making it more efficient, however there is more chance +of the corruption issue above. + +If files don't change on the remote outside of the control of rclone +then there is no chance of corruption. + +This is the same as setting the attr_timeout option in mount.fuse. + +Filters + +Note that all the rclone filters can be used to select a subset of the +files to be visible in the mount. + +systemd + +When running rclone nfsmount as a systemd service, it is possible to use +Type=notify. In this case the service will enter the started state after +the mountpoint has been successfully set up. Units having the rclone +nfsmount service specified as a requirement will see all files and +folders immediately in this mode. + +Note that systemd runs mount units without any environment variables +including PATH or HOME. This means that tilde (~) expansion will not +work and you should provide --config and --cache-dir explicitly as +absolute paths via rclone arguments. Since mounting requires the +fusermount program, rclone will use the fallback PATH of /bin:/usr/bin +in this scenario. Please ensure that fusermount is present on this PATH. + +Rclone as Unix mount helper + +The core Unix program /bin/mount normally takes the -t FSTYPE argument +then runs the /sbin/mount.FSTYPE helper program passing it mount options +as -o key=val,... or --opt=.... Automount (classic or systemd) behaves +in a similar way. + +rclone by default expects GNU-style flags --key val. To run it as a +mount helper you should symlink rclone binary to /sbin/mount.rclone and +optionally /usr/bin/rclonefs, e.g. +ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and +translate command-line arguments appropriately. + +Now you can run classic mounts like this: + + mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem + +or create systemd mount units: + + # /etc/systemd/system/mnt-data.mount + [Unit] + Description=Mount for /mnt/data + [Mount] + Type=rclone + What=sftp1:subdir + Where=/mnt/data + Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone + +optionally accompanied by systemd automount unit + + # /etc/systemd/system/mnt-data.automount + [Unit] + Description=AutoMount for /mnt/data + [Automount] + Where=/mnt/data + TimeoutIdleSec=600 + [Install] + WantedBy=multi-user.target + +or add in /etc/fstab a line like + + sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 + +or use classic Automountd. 
Remember to provide explicit
+config=...,cache-dir=... as a workaround for mount units being run
+without HOME.
+
+Rclone in the mount helper mode will split -o argument(s) by comma,
+replace _ by - and prepend -- to get the command-line flags. Options
+containing commas or spaces can be wrapped in single or double quotes.
+Any inner quotes inside outer quotes of the same type should be doubled.
+
+Mount option syntax includes a few extra options treated specially:
+
+- env.NAME=VALUE will set an environment variable for the mount
+  process. This helps with Automountd and Systemd.mount which don't
+  allow setting custom environment for mount helpers. Typically you
+  will use env.HTTPS_PROXY=proxy.host:3128 or env.HOME=/root
+- command=cmount can be used to run cmount or any other rclone command
+  rather than the default mount.
+- args2env will pass mount options to the mount helper running in
+  background via environment variables instead of command line
+  arguments. This allows hiding secrets from such commands as ps or
+  pgrep.
+- vv... will be transformed into appropriate --verbose=N
+- standard mount options like x-systemd.automount, _netdev, nosuid and
+  the like are intended only for Automountd and ignored by rclone.
+
+VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk filing
+system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the VFS
+layer has to deal with that. Because there is no one right way of doing
+this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+
+VFS Directory Cache
+
+Using the --dir-cache-time flag, you can control how long a directory
+should be considered up to date and not refreshed from the backend.
+Changes made through the VFS will appear immediately or invalidate the
+cache.
+
+    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
+    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes. If the backend supports polling, changes will be picked up
+within the polling interval.
+
+You can send a SIGHUP signal to rclone for it to flush all directory
+caches, regardless of how old they are. Assuming only one rclone
+instance is running, you can reset the cache like this:
+
+    kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+    rclone rc vfs/forget
+
+Or individual files or directories:
+
+    rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+VFS File Buffering
+
+The --buffer-size flag determines the amount of memory that will be
+used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not yet
+read.
+If the buffer is empty, only a small amount of memory will be used.
+
+The maximum memory used by rclone for buffering can be up to
+--buffer-size * open files.
+
+VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+    --cache-dir string                     Directory rclone will use for caching.
+    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
+    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
+    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
+    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
+    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
+    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+and if they haven't been accessed for --vfs-write-back seconds. If
+rclone is quit or dies with files that haven't been uploaded, these will
+be uploaded next time rclone is run with the same flags.
+
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.
+
+The --vfs-cache-max-age will evict files from the cache after the set
+time since last access has passed. The default value of 1 hour will
+start evicting files from cache that haven't been accessed for 1 hour.
+When a cached file is accessed the 1 hour timer is reset to 0 and will
+wait for 1 more hour before evicting. Specify the time with standard
+notation, s, m, h, d, w.
+
+You should not run two copies of rclone using the same VFS cache with
+the same or overlapping remotes if using --vfs-cache-mode > off. This
+can potentially cause data corruption if you do. You can work around
+this by giving each rclone its own cache hierarchy with --cache-dir. You
+don't need to worry about this if the remotes in use don't overlap.
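+
+For example (the mountpoints and cache paths here are illustrative),
+two mounts of overlapping remotes can each be given their own cache
+hierarchy:
+
+    rclone nfsmount remote: /mnt/a --vfs-cache-mode writes --cache-dir ~/.cache/rclone-a
+    rclone nfsmount remote:subdir /mnt/b --vfs-cache-mode writes --cache-dir ~/.cache/rclone-b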
+ +This will mean some operations are not possible + +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried + +--vfs-cache-mode minimal + +This is very similar to "off" except that files opened for read AND +write will be buffered to disk. This means that files opened for write +will be a lot more compatible, but uses the minimal disk space. + +These operations are not possible + +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried + +--vfs-cache-mode writes + +In this mode files opened for read only are still read directly from the +remote, write only and read/write files are buffered to disk first. + +This mode should support all normal file system operations. + +If an upload fails it will be retried at exponentially increasing +intervals up to 1 minute. + +--vfs-cache-mode full + +In this mode all reads and writes are buffered to and from disk. When +data is read from the remote this is buffered to disk as well. + +In this mode the files in the cache will be sparse files and rclone will +keep track of which bits of the files it has downloaded. + +So if an application only reads the starts of each file, then rclone +will only buffer the start of the file. These files will appear to be +their full size in the cache, but they will be sparse files with only +the data that has been downloaded present in them. + +This mode should support all normal file system operations and is +otherwise identical to --vfs-cache-mode writes. + +When reading a file rclone will read --buffer-size plus --vfs-read-ahead +bytes ahead. The --buffer-size is buffered in memory whereas the +--vfs-read-ahead is buffered on disk. + +When using this mode it is recommended that --buffer-size is not set too +large and --vfs-read-ahead is set large if required. + +IMPORTANT not all file systems support sparse files. In particular +FAT/exFAT do not. Rclone will perform very badly if the cache directory +is on a filesystem which doesn't support sparse files and it will log an +ERROR message if one is detected. + +Fingerprinting + +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. Fingerprints are made from: + +- size +- modification time +- hash + +where available on an object. + +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). + +For example hash is slow with the local and sftp backends as they have +to read the entire file and hash it, and modtime is slow with the s3, +swift, ftp and qinqstor backends because they need to do an extra API +call to fetch it. + +If you use the --vfs-fast-fingerprint flag then rclone will not include +the slow operations in the fingerprint. This makes the fingerprinting +less accurate but much faster and will improve the opening time of +cached files. + +If you are running a vfs cache over local, s3 or swift backends then +using this flag is recommended. 
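+
+For instance (an illustrative sketch rather than a universal
+recommendation):
+
+    rclone nfsmount remote:path/to/files /path/to/mountpoint --vfs-cache-mode full --vfs-fast-fingerprint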
+ +Note that if you change the value of this flag, the fingerprints of the +files in the cache may be invalidated and the files will need to be +downloaded again. + +VFS Chunked Reading + +When rclone reads files from a remote it reads them in chunks. This +means that rather than requesting the whole file rclone reads the chunk +specified. This can reduce the used download quota for some remotes by +requesting only chunks from the remote that are actually read, at the +cost of an increased number of requests. + +These flags control the chunking: + + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + +Rclone will start reading a chunk of size --vfs-read-chunk-size, and +then double the size for each read. When --vfs-read-chunk-size-limit is +specified, and greater than --vfs-read-chunk-size, the chunk size for +each open file will get doubled only until the specified value is +reached. If the value is "off", which is the default, the limit is +disabled and the chunk size will grow indefinitely. + +With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the +following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, +300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, +the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, +1200M-1700M and so on. + +Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading. + +VFS Performance + +These flags may be used to enable/disable features of the VFS for +performance or other reasons. See also the chunked reading feature. + +In particular S3 and Swift benefit hugely from the --no-modtime flag (or +use --use-server-modtime for a slightly different effect) as each read +of the modification time takes a transaction. + + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. + +Sometimes rclone is delivered reads or writes out of order. Rather than +seeking rclone will wait a short time for the in sequence read or write +to come in. These flags only come into effect when not using an on disk +cache file. + + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + +When using VFS write caching (--vfs-cache-mode with value writes or +full), the global flag --transfers can be set to adjust the number of +parallel uploads of modified files from the cache (the related global +flag --checkers has no effect on the VFS). + + --transfers int Number of file transfers to run in parallel (default 4) + +VFS Case Sensitivity + +Linux file systems are case-sensitive: two files can differ only by +case, and the exact case must be used when opening a file. + +File systems in modern Windows are case-insensitive but case-preserving: +although existing files can be opened using any case, the exact case +used to create the file is preserved and available for programs to +query. It is not allowed for two files in the same directory to differ +only by case. + +Usually file systems on macOS are case-insensitive. It is possible to +make macOS file systems case-sensitive but that is not the default. + +The --vfs-case-insensitive VFS flag controls how rclone handles these +two cases. 
If its value is "false", rclone passes file names to the
+remote as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers to an
+existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the
+same name is not found but a name differing only by case exists, rclone
+will transparently fixup the name. This fixup happens only when an
+existing file is requested. Case sensitivity of file names created anew
+by rclone is controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source). The flag controls whether "fixup" is performed to
+satisfy the target.
+
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: "true" on Windows and
+macOS, "false" otherwise. If the flag is provided without a value, then
+it is "true".
+
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
+VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing
+system. It can be useful when those statistics cannot be read correctly
+automatically.
+
+    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
+
+Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running df on the
+filesystem, then pass the flag --vfs-used-is-size to rclone. With this
+flag set, instead of relying on the backend to report this information,
+rclone will scan the whole remote similar to rclone size and compute the
+total used space itself.
+
+WARNING. Contrary to rclone size, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots
+of API calls resulting in extra charges.
Use it as a last resort and +only with caching. + + rclone nfsmount remote:path /path/to/mountpoint [flags] + +Options + + --addr string IPaddress:Port or :Port to bind server to + --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows) + --allow-other Allow access to other users (not supported on Windows) + --allow-root Allow access to root user (not supported on Windows) + --async-read Use asynchronous reads (not supported on Windows) (default true) + --attr-timeout Duration Time for which file/directory attributes are cached (default 1s) + --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows) + --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s) + --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s) + --debug-fuse Debug the FUSE internals - needs -v + --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows) + --devname string Set the device name - default is remote:path + --dir-cache-time Duration Time to cache directory entries for (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required) + --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) + -h, --help help for nfsmount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki) + --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset) + --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only) + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) + --no-checksum Don't compare checksums on up/download + --no-modtime Don't read/write the modification time (can speed things up) + --no-seek Don't allow seeking in files + --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true) + --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) + -o, --option stringArray Option for libfuse/WinFsp (repeat if required) + --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) + --read-only Only allow read-only access + --sudo Use sudo to run the mount command as root. 
+      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
+      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
+      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
+      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
+      --vfs-cache-mode CacheMode               Cache mode off|minimal|writes|full (default off)
+      --vfs-cache-poll-interval Duration       Interval to poll the cache for stale objects (default 1m0s)
+      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
+      --vfs-disk-space-total-size SizeSuffix   Specify the total space of disk (default off)
+      --vfs-fast-fingerprint                   Use fast (less accurate) fingerprints for change detection
+      --vfs-read-ahead SizeSuffix              Extra read ahead over --buffer-size when using cache-mode full
+      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
+      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
+      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
+      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
+      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
+      --volname string                         Set the volume name (supported on Windows and OSX only)
+      --write-back-cache                       Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
+
+Filter Options
+
+Flags for filtering directory listings.
+
+      --delete-excluded                     Delete files on dest excluded from sync
+      --exclude stringArray                 Exclude files matching pattern
+      --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
+      --exclude-if-present stringArray      Exclude directories if filename is present
+      --files-from stringArray              Read list of source-file names from file (use - to read from stdin)
+      --files-from-raw stringArray          Read list of source-file names from file without any processing of lines (use - to read from stdin)
+  -f, --filter stringArray                  Add a file filtering rule
+      --filter-from stringArray             Read file filtering patterns from a file (use - to read from stdin)
+      --ignore-case                         Ignore case in filters (case insensitive)
+      --include stringArray                 Include files matching pattern
+      --include-from stringArray            Read file include patterns from file (use - to read from stdin)
+      --max-age Duration                    Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-depth int                       If set limits the recursion depth to this (default -1)
+      --max-size SizeSuffix                 Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+      --metadata-exclude stringArray        Exclude metadatas matching pattern
+      --metadata-exclude-from stringArray   Read metadata exclude patterns from file (use - to read from stdin)
+      --metadata-filter stringArray         Add a metadata filtering rule
+      --metadata-filter-from stringArray    Read metadata filtering patterns from a file (use - to read from stdin)
+      --metadata-include stringArray        Include metadatas matching pattern
+      --metadata-include-from stringArray   Read metadata include patterns from file (use - to read from stdin)
+      --min-age Duration                    Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-size SizeSuffix                 Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+
+See the global flags page for global options not listed here.
+
+SEE ALSO
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone obscure

Obscure password for use in the rclone config file.
@@ -6476,6 +7551,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood.
--vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -6519,6 +7616,7 @@ Options
      --read-only                              Only allow read-only access
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -6531,7 +7629,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -6921,6 +8019,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
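+
+For example, a macOS user whose remote may contain both NFC and NFD
+names could enable the check like this (an illustrative invocation -
+the remote and mountpoint are placeholders):
+
+    rclone mount remote: /path/to/mountpoint --vfs-block-norm-dupes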
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -6982,6 +8102,7 @@ Options
      --socket-gid int                         GID for unix socket (default: current process GID) (default 1000)
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -6994,7 +8115,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -7368,6 +8489,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -7485,6 +8628,7 @@ Options
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string                            User name for authentication (default "anonymous")
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -7497,7 +8641,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -8008,6 +9152,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -8134,6 +9300,7 @@ Options
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string                            User name for authentication
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -8146,7 +9313,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -8207,7 +9374,12 @@ and port using --addr flag.

Modifying files through NFS protocol requires VFS caching. Usually you
will need to specify --vfs-cache-mode in order to be able to write to
the mountpoint (full is recommended). If you don't specify VFS cache
-mode, the mount will be read-only.
+mode, the mount will be read-only. Note also that
+--nfs-cache-handle-limit controls the maximum number of cached file
+handles stored by the caching handler. This should not be set too low or
+you may experience errors when trying to access files. The default is
+1000000, but consider lowering this limit if the server's system
+resource usage causes problems.

To serve NFS over the network use the following command:
@@ -8532,6 +9704,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed.
However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -8564,6 +9758,7 @@ Options
      --file-perms FileMode                    File permissions (default 0666)
      --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                                   help for nfs
+      --nfs-cache-handle-limit int             max file handles cached simultaneously (min 5) (default 1000000)
      --no-checksum                            Don't compare checksums on up/download
      --no-modtime                             Don't read/write the modification time (can speed things up)
      --no-seek                                Don't allow seeking in files
@@ -8571,6 +9766,7 @@ Options
      --read-only                              Only allow read-only access
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -8583,7 +9779,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -9279,6 +10475,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates.
This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed. However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -9331,6 +10549,7 @@ Options
      --server-write-timeout Duration          Timeout for server writing data (default 1h0m0s)
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -9343,7 +10562,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -9748,6 +10967,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed.
However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -9865,6 +11106,7 @@ Options
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string                            User name for authentication
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -9877,7 +11119,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -10418,6 +11660,28 @@ depends on the operating system where rclone runs: "true" on Windows and
macOS, "false" otherwise. If the flag is provided without a value, then
it is "true".
+The --no-unicode-normalization flag controls whether a similar "fixup"
+is performed for filenames that differ but are canonically equivalent
+with respect to unicode. Unicode normalization can be particularly
+helpful for users of macOS, which prefers form NFD instead of the NFC
+used by most other platforms. It is therefore highly recommended to keep
+the default of false on macOS, to avoid encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+--vfs-block-norm-dupes flag allows hiding these duplicates. This comes
+with a performance tradeoff, as rclone will have to scan the entire
+directory for duplicates when listing a directory. For this reason, it
+is recommended to leave this disabled if not needed.
However, macOS
+users may wish to consider using it, as otherwise, if a remote directory
+contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the
+mount, and both will appear to be editable; however, editing either
+version will actually result in only the NFD version getting edited
+under the hood. --vfs-block-norm-dupes prevents this confusion by
+detecting this scenario, hiding the duplicates, and logging an error,
+similar to how this is handled in rclone sync.
+
VFS Disk Options

This flag allows you to manually set the statistics about the filing
@@ -10546,6 +11810,7 @@ Options
      --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string                            User name for authentication
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
      --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -10558,7 +11823,7 @@ Options
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
      --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
      --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
      --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
@@ -11290,17 +12555,20 @@ This can be used when scripting to make aged backups efficiently, e.g.

Metadata support

-Metadata is data about a file which isn't the contents of the file.
-Normally rclone only preserves the modification time and the content
-(MIME) type where possible.
+Metadata is data about a file (or directory) which isn't the contents of
+the file (or directory). Normally rclone only preserves the modification
+time and the content (MIME) type where possible.

-Rclone supports preserving all the available metadata on files (not
-directories) when using the --metadata or -M flag.
+Rclone supports preserving all the available metadata on files and
+directories when using the --metadata or -M flag.

Exactly what metadata is supported and what that support means depends
on the backend. Backends that support metadata have a metadata section
in their docs and are listed in the features table (e.g. local, s3).

+Some backends don't support metadata, some only support metadata on
+files and some support metadata on both files and directories.
+
Rclone only supports a one-time sync of metadata. This means that
metadata will be synced from the source object to the destination
object only when the source object has changed and needs to be
re-uploaded.
If
@@ -11320,6 +12588,13 @@ The --metadata-mapper flag can be used to pass the name of a program in
which can transform metadata when it is being copied from source to
destination.

+Rclone supports --metadata-set and --metadata-mapper when doing server
+side Move and server side Copy, but not when doing server side DirMove
+(renaming a directory) as this would involve recursing into the
+directory. Note that you can disable DirMove with --disable DirMove and
+rclone will revert to using Move for each individual object where
+--metadata-set and --metadata-mapper are supported.
+
Types of metadata

Metadata is divided into two types. System metadata and User metadata.
@@ -11977,6 +13252,24 @@ NB: Enabling this option turns a usually non-fatal error into a
potentially fatal one - please check and adjust your scripts
accordingly!

+--fix-case
+
+Normally, a sync to a case insensitive dest (such as macOS / Windows)
+will not result in a matching filename if the source and dest filenames
+have casing differences but are otherwise identical. For example,
+syncing hello.txt to HELLO.txt will normally result in the dest filename
+remaining HELLO.txt. If --fix-case is set, then HELLO.txt will be
+renamed to hello.txt to match the source.
+
+NB:
+
+- directory names with incorrect casing will also be fixed
+- --fix-case will be ignored if --immutable is set
+- using --local-case-sensitive instead is not advisable; it will cause
+  HELLO.txt to get deleted!
+- the old dest filename must not be excluded by filters. Be especially
+  careful with --files-from, which does not respect --ignore-case!
+- on remotes that do not support server-side move, --fix-case will
+  require downloading the file and re-uploading it. To avoid this, do
+  not use --fix-case.
+
--fs-cache-expire-duration=TIME

When using rclone via the API rclone caches created remotes for 5
@@ -12396,10 +13689,10 @@ some context for the Metadata which may be important.

- DstFs is the config string for the remote that the object is being
  copied to
- DstFsType is the name of the destination backend.
-- Remote is the path of the file relative to the root.
-- Size, MimeType, ModTime are attributes of the file.
+- Remote is the path of the object relative to the root.
+- Size, MimeType, ModTime are attributes of the object.
- IsDir is true if this is a directory (not yet implemented).
-- ID is the source ID of the file if known.
+- ID is the source ID of the object if known.
- Metadata is the backend specific metadata as described in the backend
  docs.
@@ -12521,7 +13814,7 @@ When transferring files above SIZE to capable backends, rclone will use
multiple threads to transfer the file (default 256M).

Capable backends are marked in the overview as MultithreadUpload. (They
-need to implement either the OpenWriterAt or OpenChunkedWriter internal
+need to implement either the OpenWriterAt or OpenChunkWriter internal
interfaces). These include local, s3, azureblob, b2,
oracleobjectstorage and smb at the time of writing.
@@ -12631,6 +13924,11 @@ files if they are incorrect as it would normally. This can be used if
the remote is being synced with another tool also (e.g. the Google
Drive client).

+--no-update-dir-modtime
+
+When using this flag, rclone won't update modification times of remote
+directories if they are incorrect as it would normally.
+
--order-by string

The --order-by flag controls the order in which files in the backlog are
@@ -13719,14 +15017,14 @@ Use web browser to automatically authenticate?
question.
Execute the following on the machine with the web browser (same rclone
version recommended):

-    rclone authorize "amazon cloud drive"
+    rclone authorize "dropbox"

Then paste the result below:
result>

Then on your main desktop machine

-    rclone authorize "amazon cloud drive"
+    rclone authorize "dropbox"

If your browser doesn't open automatically go to the following link:
http://127.0.0.1:53682/auth Log in and authorize rclone for access
Waiting for code...
@@ -14319,7 +15617,7 @@ E.g. for an alternative filter-file.txt:

    - *

Files file1.jpg, file3.png and file2.avi are listed whilst secret17.jpg
-and files without the suffix .jpgor.png` are excluded.
+and files without the suffix .jpg or .png are excluded.

E.g. for an alternative filter-file.txt:
@@ -15273,6 +16571,26 @@ See the config password command for more information on the above.

Authentication is required for this call.

+config/paths: Reads the config file path and other important paths.
+
+Returns a JSON object with the following keys:
+
+- config: path to config file
+- cache: path to root of cache directory
+- temp: path to root of temporary directory
+
+Eg
+
+    {
+        "cache": "/home/USER/.cache/rclone",
+        "config": "/home/USER/.rclone.conf",
+        "temp": "/tmp"
+    }
+
+See the config paths command for more information on the above.
+
+Authentication is required for this call.
+
config/providers: Shows how providers are configured in the config file.

Returns a JSON object: - providers - array of objects
@@ -16082,6 +17400,52 @@ instead:

    rclone rc --loopback operations/fsinfo fs=remote:

+operations/hashsum: Produces a hashsum file for all the objects in the path.
+
+Produces a hash file for all the objects in the path using the hash
+named. The output is in the same format as the standard md5sum/sha1sum
+tool.
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:" for the source, "/" for
+  local filesystem
+  - this can point to a file and just that file will be returned in
+    the listing.
+- hashType - type of hash to be used
+- download - check by downloading rather than with hash (boolean)
+- base64 - output the hashes in base64 rather than hex (boolean)
+
+If you supply the download flag, it will download the data from the
+remote and create the hash on the fly. This can be useful for remotes
+that don't support the given hash or if you really want to check all the
+data.
+
+Note that if you wish to supply a checkfile to check hashes against the
+current files then you should use operations/check instead of
+operations/hashsum.
+
+Returns:
+
+- hashsum - array of strings of the hashes
+- hashType - type of hash used
+
+Example:
+
+    $ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true
+    {
+        "hashType": "md5",
+        "hashsum": [
+            "WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh",
+            "v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh",
+            "VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh"
+        ]
+    }
+
+See the hashsum command for more information on the above.
+
+Authentication is required for this call.
+
operations/list: List the given remote and path in JSON format

This takes the following parameters:
@@ -16463,7 +17827,11 @@ This takes the following parameters

- resilient - Allow future runs to retry after certain less-serious
  errors, instead of requiring resync. Use at your own risk!
- workdir - server directory for history files (default:
-  /home/ncw/.cache/rclone/bisync)
+  ~/.cache/rclone/bisync)
+- backupdir1 - --backup-dir for Path1. Must be a non-overlapping path
+  on the same remote.
+
+- backupdir2 - --backup-dir for Path2. Must be a non-overlapping path
+  on the same remote.
- noCleanup - retain working files

See bisync command help and full bisync description for more
@@ -16829,7 +18197,6 @@ Here is an overview of the major features of each cloud storage system.

  ------------------------------- ------------------- --------- ------------------ ----------------- ----------- ----------
  1Fichier                        Whirlpool           -         No                 Yes               R           -

  Akamai Netstorage               MD5, SHA256         R/W       No                 No                R           -
-
-  Amazon Drive                    MD5                 -         Yes                No                R           -

  Amazon S3 (or S3 compatible)    MD5                 R/W       No                 No                R/W         RWU

  Backblaze B2                    SHA1                R/W       No                 No                R/W         -

  Box                             SHA1                R/W       Yes                No                -           -
@@ -16838,7 +18205,7 @@
  Enterprise File Fabric          -                   R/W       Yes                No                R/W         -

  FTP                             -                   R/W ¹⁰    No                 No                -           -

  Google Cloud Storage            MD5                 R/W       No                 No                R/W         -

-  Google Drive                    MD5, SHA1, SHA256   R/W       No                 Yes               R/W         -
+  Google Drive                    MD5, SHA1, SHA256   DR/W      No                 Yes               R/W         DRWU

  Google Photos                   -                   -         No                 Yes               R           -

  HDFS                            -                   R/W       No                 No                -           -

  HiDrive                         HiDrive ¹²          R/W       No                 No                -           -
@@ -16852,7 +18219,7 @@
  Memory                          MD5                 R/W       No                 No                -           -

  Microsoft Azure Blob Storage    MD5                 R/W       No                 No                R/W         -

  Microsoft Azure Files Storage   MD5                 R/W       Yes                No                R/W         -

-  Microsoft OneDrive              QuickXorHash ⁵      R/W       Yes                No                R           -
+  Microsoft OneDrive              QuickXorHash ⁵      DR/W      Yes                No                R           DRW

  OpenDrive                       MD5                 R/W       Yes                Partial ⁸         -           -

  OpenStack Swift                 MD5                 R/W       No                 No                R/W         -

  Oracle Object Storage           MD5                 R/W       No                 No                R/W         -
@@ -16864,7 +18231,7 @@
  QingStor                        MD5                 - ⁹       No                 No                R/W         -

  Quatrix by Maytech              -                   R/W       No                 No                -           -

  Seafile                         -                   -         No                 No                -           -

-  SFTP                            MD5, SHA1 ²         R/W       Depends            No                -           -
+  SFTP                            MD5, SHA1 ²         DR/W      Depends            No                -           -

  Sia                             -                   -         No                 No                -           -

  SMB                             -                   R/W       Yes                No                -           -

  SugarSync                       -                   -         No                 No                -           -
@@ -16873,7 +18240,7 @@
  WebDAV                          MD5, SHA1 ³         R ⁴       Depends            No                -           -

  Yandex Disk                     MD5                 R/W       No                 No                R           -

  Zoho WorkDrive                  -                   -         No                 No                -           -

-  The local filesystem            All                 R/W       Depends            No                -           RWU
+  The local filesystem            All                 DR/W      Depends            No                -           DRWU

¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the
4 MiB block SHA256s.
@@ -16925,13 +18292,31 @@ ModTime

Almost all cloud storage systems store some sort of timestamp on
objects, but for several of them it is not something that is appropriate
to use for syncing. E.g. some backends will only write a timestamp that
-represent the time of the upload. To be relevant for syncing it should
+represents the time of the upload. To be relevant for syncing it should
be able to store the modification time of the source object. If this is
not the case, rclone will only check the file size by default, though it
can be configured to check the file hash (with the --checksum flag).
Ideally it should also be possible to change the timestamp of an
existing file without having to re-upload it.
+
+  -----------------------------------------------------------------------
+  Key                 Explanation
+  ------------------- ---------------------------------------------------
+  -                   ModTimes not supported - times likely the upload
+                      time
+
+  R                   ModTimes supported on files but can't be changed
+                      without re-upload
+
+  R/W                 Read and Write ModTimes fully supported on files
+
+  DR                  ModTimes supported on files and directories but
+                      can't be changed without re-upload
+
+  DR/W                Read and Write ModTimes fully supported on files
+                      and directories
+  -----------------------------------------------------------------------
+
For storage systems with a - in the ModTime column, the modification
time read on objects is not the modification time of the file when uploaded.
It is most likely the time the file was uploaded, or possibly something
@@ -16951,6 +18336,9 @@ time only on files in a mount will be silently ignored.

Storage systems with R/W (for read/write) in the ModTime column also
support modtime-only operations.

+A D in the ModTime column means that the following symbols apply to
+directories as well as files.
+
Case Insensitive

If a cloud storage system is case sensitive then it is possible to have
@@ -17298,11 +18686,24 @@ backend) and/or user metadata (general purpose metadata).

The levels of metadata support are

-  Key Explanation
-  ----- -----------------------------------------------------------------
-  R Read only System Metadata
-  RW Read and write System Metadata
-  RWU Read and write System Metadata and read and write User Metadata
+  -----------------------------------------------------------------------
+  Key                 Explanation
+  ------------------- ---------------------------------------------------
+  R                   Read only System Metadata on files only
+
+  RW                  Read and write System Metadata on files only
+
+  RWU                 Read and write System Metadata and read and write
+                      User Metadata on files only
+
+  DR                  Read only System Metadata on files and directories
+
+  DRW                 Read and write System Metadata on files and
+                      directories
+
+  DRWU                Read and write System Metadata and read and write
+                      User Metadata on files and directories
+  -----------------------------------------------------------------------

See the metadata docs for more info.
@@ -17319,8 +18720,6 @@ upon backend-specific capabilities.

  Akamai         Yes   No    No    No    No    Yes   Yes   No    No    No    Yes
  Netstorage
-
-  Amazon Drive   Yes   No    Yes   Yes   No    No    No    No    No    No    Yes

  Amazon S3 (or  No    Yes   No    No    Yes   Yes   Yes   Yes   Yes   No    No
  S3 compatible)
@@ -17351,6 +18750,8 @@
  HTTP           No    No    No    No    No    No    No    No    No    No    Yes

+  ImageKit       Yes   Yes   Yes   No    No    No    No    No    No    No    Yes
+
  Internet       No    Yes   No    No    Yes   Yes   No    No    Yes   Yes   No
  Archive
@@ -17415,7 +18816,7 @@
  Zoho WorkDrive Yes   Yes   Yes   Yes   No    No    No    No    No    Yes   Yes

-  The local      Yes   No    Yes   Yes   No    No    Yes   Yes   No    Yes   Yes
+  The local      No    No    Yes   Yes   No    No    Yes   Yes   No    Yes   Yes
  filesystem
  -------------------------------------------------------------------------------------------------------------------------------------
@@ -17532,7 +18933,7 @@ Flags for anything which can Copy a file.
--ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -17546,6 +18947,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -17563,6 +18965,7 @@ Flags just used for rclone sync. --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -17609,7 +19012,7 @@ General networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0") Performance @@ -17766,14 +19169,7 @@ Backend Backend only flags. These can be set in the config file also. - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name @@ -17784,6 +19180,8 @@ Backend only flags. These can be set in the config file also. 
--azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal's client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don't store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -17814,6 +19212,7 @@ Backend only flags. These can be set in the config file also. --azurefiles-client-secret string One of the service principal's client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -17833,8 +19232,9 @@ Backend only flags. These can be set in the config file also. --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -17853,6 +19253,7 @@ Backend only flags. These can be set in the config file also. --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -17869,6 +19270,7 @@ Backend only flags. These can be set in the config file also. --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) 
(default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) @@ -17882,15 +19284,19 @@ Backend only flags. These can be set in the config file also. --cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress -L, --copy-links Follow symlinks and copy the pointed to item + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32") --crypt-filename-encryption string How to encrypt the filenames (default "standard") @@ -17901,6 +19307,7 @@ Backend only flags. These can be set in the config file also. --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted --crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin") --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs @@ -17910,6 +19317,7 @@ Backend only flags. These can be set in the config file also. --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -17958,6 +19366,7 @@ Backend only flags. These can be set in the config file also. 
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -17967,10 +19376,12 @@ Backend only flags. These can be set in the config file also. --dropbox-token-url string Token server url --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -17981,6 +19392,7 @@ Backend only flags. These can be set in the config file also. --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -18006,6 +19418,7 @@ Backend only flags. These can be set in the config file also. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -18026,6 +19439,7 @@ Backend only flags. These can be set in the config file also. --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -18034,10 +19448,12 @@ Backend only flags. These can be set in the config file also. 
--gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -18046,6 +19462,7 @@ Backend only flags. These can be set in the config file also. --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") @@ -18056,10 +19473,12 @@ Backend only flags. These can be set in the config file also. --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of HTTP host to connect to + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -18068,6 +19487,7 @@ Backend only flags. These can be set in the config file also. --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2" --imagekit-versions Include old versions in directory listings --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") @@ -18077,6 +19497,7 @@ Backend only flags. These can be set in the config file also. 
--jottacloud-auth-url string Auth server URL
 --jottacloud-client-id string OAuth Client Id
 --jottacloud-client-secret string OAuth Client Secret
+ --jottacloud-description string Description of the remote
 --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
 --jottacloud-hard-delete Delete files permanently rather than putting them into the trash
 --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
@@ -18085,6 +19506,7 @@ Backend only flags. These can be set in the config file also.
 --jottacloud-token-url string Token server url
 --jottacloud-trashed-only Only show files that are in the trash
 --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
+ --koofr-description string Description of the remote
 --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --koofr-endpoint string The Koofr API endpoint to use
 --koofr-mountid string Mount ID of the mount to use
@@ -18092,10 +19514,12 @@ Backend only flags. These can be set in the config file also.
 --koofr-provider string Choose your storage provider
 --koofr-setmtime Does the backend support setting modification time (default true)
 --koofr-user string Your user name
+ --linkbox-description string Description of the remote
 --linkbox-token string Token from https://www.linkbox.to/admin/account
 -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
 --local-case-insensitive Force the filesystem to report itself as case insensitive
 --local-case-sensitive Force the filesystem to report itself as case sensitive
+ --local-description string Description of the remote
 --local-encoding Encoding The encoding for the backend (default Slash,Dot)
 --local-no-check-updated Don't check to see if the files change during upload
 --local-no-preallocate Disable preallocation of disk space for transferred files
@@ -18108,6 +19532,7 @@ Backend only flags. These can be set in the config file also.
 --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
 --mailru-client-id string OAuth Client Id
 --mailru-client-secret string OAuth Client Secret
+ --mailru-description string Description of the remote
 --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --mailru-pass string Password (obscured)
 --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
@@ -18118,12 +19543,15 @@ Backend only flags. These can be set in the config file also.
--mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) @@ -18135,6 +19563,7 @@ Backend only flags. These can be set in the config file also. --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -18144,6 +19573,7 @@ Backend only flags. These can be set in the config file also. --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous") --onedrive-link-type string Set the type of the links created by the link command (default "view") --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default "global") --onedrive-root-folder-id string ID of the root folder @@ -18157,6 +19587,7 @@ Backend only flags. These can be set in the config file also. --oos-config-profile string Profile name inside the oci config file (default "Default") --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -18175,12 +19606,14 @@ Backend only flags. These can be set in the config file also. 
--oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-password string Your pcloud password (obscured) @@ -18191,6 +19624,7 @@ Backend only flags. These can be set in the config file also. --pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -18203,11 +19637,13 @@ Backend only flags. These can be set in the config file also. --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -18218,12 +19654,14 @@ Backend only flags. These can be set in the config file also. 
--putio-auth-url string Auth server URL
 --putio-client-id string OAuth Client Id
 --putio-client-secret string OAuth Client Secret
+ --putio-description string Description of the remote
 --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --putio-token string OAuth Access Token as a JSON blob
 --putio-token-url string Token server url
 --qingstor-access-key-id string QingStor Access Key ID
 --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
 --qingstor-connection-retries int Number of connection retries (default 3)
+ --qingstor-description string Description of the remote
 --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
 --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
 --qingstor-env-auth Get QingStor credentials from runtime
@@ -18232,18 +19670,21 @@ Backend only flags. These can be set in the config file also.
 --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
 --qingstor-zone string Zone to connect to
 --quatrix-api-key string API key for accessing Quatrix account
+ --quatrix-description string Description of the remote
 --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
 --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
 --quatrix-hard-delete Delete files permanently rather than putting them into the trash
 --quatrix-host string Host name of Quatrix account
 --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
 --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
+ --quatrix-skip-project-folders Skip project folders in operations
 --s3-access-key-id string AWS Access Key ID
 --s3-acl string Canned ACL used when creating buckets and storing or copying objects
 --s3-bucket-acl string Canned ACL used when creating buckets
 --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
 --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
 --s3-decompress If set this will decompress gzip encoded objects
+ --s3-description string Description of the remote
 --s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --s3-disable-checksum Don't store MD5 checksum with object metadata
 --s3-disable-http2 Disable usage of http2 for S3 backends
@@ -18278,19 +19719,22 @@ Backend only flags. These can be set in the config file also.
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist + --seafile-description string Description of the remote --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -18302,6 +19746,7 @@ Backend only flags. These can be set in the config file also. --sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -18336,6 +19781,7 @@ Backend only flags. These can be set in the config file also. --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -18344,10 +19790,12 @@ Backend only flags. These can be set in the config file also. 
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true) @@ -18359,6 +19807,7 @@ Backend only flags. These can be set in the config file also. --smb-user string SMB username (default "$USER") --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default "existing") --storj-satellite-address string Satellite address (default "us1.storj.io") @@ -18367,6 +19816,7 @@ Backend only flags. These can be set in the config file also. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -18380,6 +19830,7 @@ Backend only flags. These can be set in the config file also. --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") @@ -18399,17 +19850,21 @@ Backend only flags. These can be set in the config file also. 
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
 --union-cache-time int Cache time of usage and free space (in seconds) (default 120)
 --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
+ --union-description string Description of the remote
 --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
 --union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
 --union-upstreams string List of space separated upstreams
 --uptobox-access-token string Your access token
+ --uptobox-description string Description of the remote
 --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
 --uptobox-private Set to make uploaded files private
 --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
 --webdav-bearer-token-command string Command to run to get a bearer token
+ --webdav-description string Description of the remote
 --webdav-encoding string The encoding for the backend
 --webdav-headers CommaSepList Set HTTP headers for all transactions
 --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
+ --webdav-owncloud-exclude-shares Exclude ownCloud shares
 --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
 --webdav-pass string Password (obscured)
 --webdav-url string URL of http host to connect to
@@ -18418,6 +19873,7 @@ Backend only flags. These can be set in the config file also.
 --yandex-auth-url string Auth server URL
 --yandex-client-id string OAuth Client Id
 --yandex-client-secret string OAuth Client Secret
+ --yandex-description string Description of the remote
 --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
 --yandex-hard-delete Delete files permanently rather than putting them into the trash
 --yandex-token string OAuth Access Token as a JSON blob
@@ -18425,6 +19881,7 @@ Backend only flags. These can be set in the config file also.
 --zoho-auth-url string Auth server URL
 --zoho-client-id string OAuth Client Id
 --zoho-client-secret string OAuth Client Secret
+ --zoho-description string Description of the remote
 --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
 --zoho-region string Zoho region to connect to
 --zoho-token string OAuth Access Token as a JSON blob
@@ -18934,20 +20391,36 @@ and verify that settings did update:
 
 If docker refuses to remove the volume, you should find containers or
 swarm services that use it and stop them first.
 
+Bisync
+
+bisync is in beta and is considered an advanced command, so use with
+care. Make sure you have read and understood the entire manual
+(especially the Limitations section) before using, or data loss can
+result. Questions can be asked in the Rclone Forum.
+
 Getting started
 
 - Install rclone and set up your remotes.
 - Bisync will create its working directory at ~/.cache/rclone/bisync
-  on Linux or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows.
-  Make sure that this location is writable.
+  on Linux, /Users/yourusername/Library/Caches/rclone/bisync on Mac,
+  or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows. Make
+  sure that this location is writable.
- Run bisync with the --resync flag, specifying the paths to the local
  and remote sync directory roots.
-- For successive sync runs, leave off the --resync flag.
+- For successive sync runs, leave off the --resync flag. (Important!) - Consider using a filters file for excluding unnecessary files and directories from the sync. - Consider setting up the --check-access feature for safety. -- On Linux, consider setting up a crontab entry. bisync can safely run - in concurrent cron jobs thanks to lock files it maintains. +- On Linux or Mac, consider setting up a crontab entry. bisync can + safely run in concurrent cron jobs thanks to lock files it + maintains. + +For example, your first command might look like this: + + rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run + +If all looks good, run it again without --dry-run. After that, remove +--resync as well. Here is a typical run log (with timestamps removed for clarity): @@ -19004,36 +20477,36 @@ Command line syntax Type 'rclone listremotes' for list of configured remotes. Optional Flags: - --check-access Ensure expected `RCLONE_TEST` files are found on - both Path1 and Path2 filesystems, else abort. - --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`) - --check-sync CHOICE Controls comparison of final listings: - `true | false | only` (default: true) - If set to `only`, bisync will only compare listings - from the last run but skip actual sync. - --filters-file PATH Read filtering patterns from a file - --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. - If exceeded, the bisync run will abort. (default: 50%) - --force Bypass `--max-delete` safety check and run the sync. - Consider using with `--verbose` - --create-empty-src-dirs Sync creation and deletion of empty directories. - (Not compatible with --remove-empty-dirs) - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. - Warning: Path1 files may overwrite Path2 versions. - Consider using `--verbose` or `--dry-run` first. - --ignore-listing-checksum Do not use checksums for listings - (add --ignore-checksum to additionally skip post-copy checksum checks) - --resilient Allow future runs to retry after certain less-serious errors, - instead of requiring --resync. Use at your own risk! - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --workdir PATH Use custom working directory (useful for testing). - (default: `~/.cache/rclone/bisync`) - -n, --dry-run Go through the motions - No files are copied/deleted. - -v, --verbose Increases logging verbosity. - May be specified more than once for more details. - -h, --help help for bisync + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --compare string Comma-separated list of bisync-specific compare options ex. 
'size,modtime,checksum' (default: 'size,modtime') + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none") + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict') + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none") + --retries int Retry operations this many times if they fail (requires --resilient). (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) + --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%) + -n, --dry-run Go through the motions - No files are copied/deleted. + -v, --verbose Increases logging verbosity. May be specified more than once for more details. Arbitrary rclone flags may be specified on the bisync command line, for example @@ -19069,28 +20542,25 @@ Command-line flags --resync This will effectively make both Path1 and Path2 filesystems contain a -matching superset of all files. Path2 files that do not exist in Path1 -will be copied to Path1, and the process will then copy the Path1 tree -to Path2. +matching superset of all files. By default, Path2 files that do not +exist in Path1 will be copied to Path1, and the process will then copy +the Path1 tree to Path2. 
-The --resync sequence is roughly equivalent to:
+The --resync sequence is roughly equivalent to the following (but see
+--resync-mode for other options):
 
- rclone copy Path2 Path1 --ignore-existing
- rclone copy Path1 Path2
-
-Or, if using --create-empty-src-dirs:
-
- rclone copy Path2 Path1 --ignore-existing
- rclone copy Path1 Path2 --create-empty-src-dirs
- rclone copy Path2 Path1 --create-empty-src-dirs
+ rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
+ rclone copy Path1 Path2 [--create-empty-src-dirs]
 
 The base directories on both Path1 and Path2 filesystems must exist or
 bisync will fail. This is required for safety - that bisync can verify
 that both paths are valid.
 
 When using --resync, a newer version of a file on the Path2 filesystem
-will be overwritten by the Path1 filesystem version. (Note that this is
-NOT entirely symmetrical.) Carefully evaluate deltas using --dry-run.
+will (by default) be overwritten by the Path1 filesystem version. (Note
+that this is NOT entirely symmetrical, and more symmetrical options can
+be specified with the --resync-mode flag.) Carefully evaluate deltas
+using --dry-run.
 
 For a resync run, one of the paths may be empty (no files in the path
 tree). The resync run should result in files on both paths, else a
@@ -19102,6 +20572,100 @@ Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst
 This is a safety check that an unexpected empty path does not result in
 deleting everything in the other path.
 
+Note that --resync implies --resync-mode path1 unless a different
+--resync-mode is explicitly specified. It is not necessary to use both
+the --resync and --resync-mode flags -- either one is sufficient without
+the other.
+
+Note: --resync (including --resync-mode) should only be used under three
+specific (rare) circumstances:
+
+1. It is your first bisync run (between these two paths).
+2. You've just made changes to your bisync settings (such as editing
+   the contents of your --filters-file).
+3. There was an error on the prior run, and as a result, bisync now
+   requires --resync to recover.
+
+The rest of the time, you should omit --resync. The reason is that
+--resync will only copy (not sync) each side to the other. Therefore, if
+you included --resync for every bisync run, it would never be possible
+to delete a file -- the deleted file would always keep reappearing at
+the end of every run (because it's being copied from the other side
+where it still exists). Similarly, renaming a file would always result
+in a duplicate copy (both old and new name) on both sides.
+
+If you find that frequent interruptions from #3 are an issue, rather
+than automatically running --resync, the recommended alternative is to
+use the --resilient, --recover, and --conflict-resolve flags (along
+with Graceful Shutdown mode, when needed) for a very robust
+"set-it-and-forget-it" bisync setup that can automatically bounce back
+from almost any interruption it might encounter. Consider adding
+something like the following:
+
+    --resilient --recover --max-lock 2m --conflict-resolve newer
+
+--resync-mode CHOICE
+
+In the event that a file differs on both sides during a --resync,
+--resync-mode controls which version will overwrite the other. The
+supported options are similar to --conflict-resolve. For all of the
+following options, the version that is kept is referred to as the
+"winner", and the version that is overwritten (deleted) is referred to
+as the "loser".
The options are named after the "winner":
+
+- path1 - (the default) - the version from Path1 is unconditionally
+  considered the winner (regardless of modtime and size, if any). This
+  can be useful if one side is more trusted or up-to-date than the
+  other, at the time of the --resync.
+- path2 - same as path1, except the path2 version is considered the
+  winner.
+- newer - the newer file (by modtime) is considered the winner,
+  regardless of which side it came from. This may result in having a
+  mix of some winners from Path1, and some winners from Path2. (The
+  implementation is analogous to running rclone copy --update in both
+  directions.)
+- older - same as newer, except the older file is considered the
+  winner, and the newer file is considered the loser.
+- larger - the larger file (by size) is considered the winner
+  (regardless of modtime, if any). This can be a useful option for
+  remotes without modtime support, or with the kinds of files (such as
+  logs) that tend to grow but not shrink, over time.
+- smaller - the smaller file (by size) is considered the winner
+  (regardless of modtime, if any).
+
+For all of the above options, note the following:
+
+- If either of the underlying remotes lacks support for the chosen
+  method, it will be ignored and will fall back to the default of
+  path1. (For example, if --resync-mode newer is set, but one of the
+  paths uses a remote that doesn't support modtime.)
+- If a winner can't be determined because the chosen method's
+  attribute is missing or equal, it will be ignored, and bisync will
+  instead try to determine whether the files differ by looking at the
+  other --compare methods in effect. (For example, if --resync-mode
+  newer is set, but the Path1 and Path2 modtimes are identical, bisync
+  will compare the sizes.) If bisync concludes that they differ,
+  preference is given to whichever is the "source" at that moment. (In
+  practice, this gives a slight advantage to Path2, as the 2to1 copy
+  comes before the 1to2 copy.) If the files do not differ, nothing is
+  copied (as both sides are already correct).
+- These options apply only to files that exist on both sides (with the
+  same name and relative path). Files that exist only on one side and
+  not the other are always copied to the other during --resync (this
+  is one of the main differences between resync and non-resync runs.)
+- --conflict-resolve, --conflict-loser, and --conflict-suffix do not
+  apply during --resync, and unlike these flags, nothing is renamed
+  during --resync. When a file differs on both sides during --resync,
+  one version always overwrites the other (much like in rclone copy.)
+  (Consider using --backup-dir to retain a backup of the losing
+  version.)
+- Unlike for --conflict-resolve, --resync-mode none is not a valid
+  option (or rather, it will be interpreted as "no resync", unless
+  --resync has also been specified, in which case it will be ignored.)
+- Winners and losers are decided at the individual file-level only
+  (there is not currently an option to pick an entire winning
+  directory atomically, although the path1 and path2 options typically
+  produce a similar result.)
+- To maintain backward-compatibility, the --resync flag implies
+  --resync-mode path1 unless a different --resync-mode is explicitly
+  specified. Similarly, all --resync-mode options (except none) imply
+  --resync, so it is not necessary to use both the --resync and
+  --resync-mode flags simultaneously -- either one is sufficient
+  without the other.
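+
+As a brief, hedged illustration (the remote names here are
+placeholders), a resync that prefers whichever side has the newer
+version of each differing file, previewed first with --dry-run, might
+look like:
+
+    rclone bisync remote1:path1 remote2:path2 --resync-mode newer --dry-run
+
+Because all --resync-mode options except none imply --resync, this one
+flag is enough to trigger a resync run; drop --dry-run once the planned
+actions look correct.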
+ --check-access Access check files are an additional safety measure against data loss. @@ -19141,6 +20705,145 @@ must exist, synchronized between your source and destination filesets, in order for --check-access to succeed. See --check-access for additional details. +--compare + +As of v1.66, bisync fully supports comparing based on any combination of +size, modtime, and checksum (lifting the prior restriction on backends +without modtime support.) + +By default (without the --compare flag), bisync inherits the same +comparison options as sync (that is: size and modtime by default, unless +modified with flags such as --checksum or --size-only.) + +If the --compare flag is set, it will override these defaults. This can +be useful if you wish to compare based on combinations not currently +supported in sync, such as comparing all three of size AND modtime AND +checksum simultaneously (or just modtime AND checksum). + +--compare takes a comma-separated list, with the currently supported +values being size, modtime, and checksum. For example, if you want to +compare size and checksum, but not modtime, you would do: + + --compare size,checksum + +Or if you want to compare all three: + + --compare size,modtime,checksum + +--compare overrides any conflicting flags. For example, if you set the +conflicting flags --compare checksum --size-only, --size-only will be +ignored, and bisync will compare checksum and not size. To avoid +confusion, it is recommended to use either --compare or the normal sync +flags, but not both. + +If --compare includes checksum and both remotes support checksums but +have no hash types in common with each other, checksums will be +considered only for comparisons within the same side (to determine what +has changed since the prior sync), but not for comparisons against the +opposite side. If one side supports checksums and the other does not, +checksums will only be considered on the side that supports them. + +When comparing with checksum and/or size without modtime, bisync cannot +determine whether a file is newer or older -- only whether it is changed +or unchanged. (If it is changed on both sides, bisync still does the +standard equality-check to avoid declaring a sync conflict unless it +absolutely has to.) + +It is recommended to do a --resync when changing --compare settings, as +otherwise your prior listing files may not contain the attributes you +wish to compare (for example, they will not have stored checksums if you +were not previously comparing checksums.) + +--ignore-listing-checksum + +When --checksum or --compare checksum is set, bisync will retrieve (or +generate) checksums (for backends that support them) when creating the +listings for both paths, and store the checksums in the listing files. +--ignore-listing-checksum will disable this behavior, which may speed +things up considerably, especially on backends (such as local) where +hashes must be computed on the fly instead of retrieved. Please note the +following: + +- As of v1.66, --ignore-listing-checksum is now automatically set when + neither --checksum nor --compare checksum are in use (as the + checksums would not be used for anything.) +- --ignore-listing-checksum is NOT the same as --ignore-checksum, and + you may wish to use one or the other, or both. 
In a nutshell:
+  --ignore-listing-checksum controls whether checksums are considered
+  when scanning for diffs, while --ignore-checksum controls whether
+  checksums are considered during the copy/sync operations that
+  follow, if there ARE diffs.
+- Unless --ignore-listing-checksum is passed, bisync currently
+  computes hashes for one path even when there's no common hash with
+  the other path (for example, a crypt remote.) This can still be
+  beneficial, as the hashes will still be used to detect changes
+  within the same side (if --checksum or --compare checksum is set),
+  even if they can't be used to compare against the opposite side.
+- If you wish to ignore listing checksums only on remotes where they
+  are slow to compute, consider using --no-slow-hash (or
+  --slow-hash-sync-only) instead of --ignore-listing-checksum.
+- If --ignore-listing-checksum is used simultaneously with
+  --compare checksum (or --checksum), checksums will be ignored for
+  bisync deltas, but still considered during the sync operations that
+  follow (if deltas are detected based on modtime and/or size.)
+
+--no-slow-hash
+
+On some remotes (notably local), checksums can dramatically slow down a
+bisync run, because hashes cannot be stored and need to be computed in
+real-time when they are requested. On other remotes (such as drive),
+they add practically no time at all. The --no-slow-hash flag will
+automatically skip checksums on remotes where they are slow, while still
+comparing them on others (assuming --compare includes checksum.) This
+can be useful when one of your bisync paths is slow but you still want
+to check checksums on the other, for a more robust sync.
+
+--slow-hash-sync-only
+
+Same as --no-slow-hash, except slow hashes are still considered during
+sync calls. They are still NOT considered for determining deltas, nor
+are they included in listings. They are also skipped during --resync.
+The main use case for this flag is when you have a large number of
+files, but relatively few of them change from run to run -- so you
+don't want to check your entire tree every time (it would take too
+long), but you still want to consider checksums for the smaller group
+of files for which a modtime or size change was detected. Keep in mind
+that this speed savings comes with a safety trade-off: if a file's
+content were to change without a change to its modtime or size, bisync
+would not detect it, and it would not be synced.
+
+--slow-hash-sync-only is only useful if both remotes share a common
+hash type (if they don't, bisync will automatically fall back to
+--no-slow-hash.) Both --no-slow-hash and --slow-hash-sync-only have no
+effect without --compare checksum (or --checksum).
+
+--download-hash
+
+If --download-hash is set, bisync will use best efforts to obtain an
+MD5 checksum by downloading and computing on-the-fly, when checksums
+are not otherwise available (for example, a remote that doesn't support
+them.) Note that since rclone has to download the entire file, this may
+dramatically slow down your bisync runs, and is also likely to use a
+lot of data, so it is probably not practical for bisync paths with a
+large total file size. However, it can be a good option for syncing
+small-but-important files with maximum accuracy (for example, a source
+code repo on a crypt remote.) An additional advantage over methods like
+cryptcheck is that the original file is not required for comparison
+(for example, --download-hash can be used to bisync two different
+crypt remotes with different passwords.)
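+
+To make that concrete, a minimal sketch (cryptA: and cryptB: here are
+hypothetical crypt remotes with different passwords) could be:
+
+    rclone bisync cryptA:vault cryptB:vault --compare size,modtime,checksum --download-hash
+
+--compare includes checksum in this sketch so that the downloaded MD5s
+are actually used for comparison; it is an illustrative pairing, not a
+prescribed one.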
When --download-hash is set, bisync still looks for more efficient
+checksums first, and falls back to downloading only when none are found.
+It takes priority over conflicting flags such as --no-slow-hash.
+--download-hash is not suitable for Google Docs and other files of
+unknown size, as their checksums would change from run to run (due to
+small variances in the internals of the generated export file.)
+Therefore, bisync automatically skips --download-hash for files with a
+size less than 0 (i.e. files of unknown size).
+
+See also: Hasher backend, cryptcheck command, rclone check --download
+option, md5sum command
+
 --max-delete
 
 As a safety check, if greater than the --max-delete percent of files
@@ -19180,6 +20883,146 @@ to the hash stored in the .md5 file. If they don't match, the run
 aborts with a critical error and thus forces you to do a --resync,
 likely avoiding a disaster.
 
+--conflict-resolve CHOICE
+
+In bisync, a "conflict" is a file that is new or changed on both sides
+(relative to the prior run) AND is not currently identical on both
+sides. --conflict-resolve controls how bisync handles such a scenario.
+The currently supported options are:
+
+- none - (the default) - do not attempt to pick a winner, keep and
+  rename both files according to --conflict-loser and
+  --conflict-suffix settings. For example, with the default settings,
+  file.txt on Path1 is renamed file.txt.conflict1 and file.txt on
+  Path2 is renamed file.txt.conflict2. Both are copied to the opposite
+  path during the run, so both sides end up with a copy of both files.
+  (As none is the default, it is not necessary to specify
+  --conflict-resolve none -- you can just omit the flag.)
+- newer - the newer file (by modtime) is considered the winner and is
+  copied without renaming. The older file (the "loser") is handled
+  according to --conflict-loser and --conflict-suffix settings (either
+  renamed or deleted.) For example, if file.txt on Path1 is newer than
+  file.txt on Path2, the result on both sides (with other default
+  settings) will be file.txt (winner from Path1) and
+  file.txt.conflict1 (loser from Path2).
+- older - same as newer, except the older file is considered the
+  winner, and the newer file is considered the loser.
+- larger - the larger file (by size) is considered the winner
+  (regardless of modtime, if any).
+- smaller - the smaller file (by size) is considered the winner
+  (regardless of modtime, if any).
+- path1 - the version from Path1 is unconditionally considered the
+  winner (regardless of modtime and size, if any). This can be useful
+  if one side is usually more trusted or up-to-date than the other.
+- path2 - same as path1, except the path2 version is considered the
+  winner.
+
+For all of the above options, note the following:
+
+- If either of the underlying remotes lacks support for the chosen
+  method, it will be ignored and fall back to none. (For example, if
+  --conflict-resolve newer is set, but one of the paths uses a remote
+  that doesn't support modtime.)
+- If a winner can't be determined because the chosen method's
+  attribute is missing or equal, it will be ignored and fall back to
+  none. (For example, if --conflict-resolve newer is set, but the
+  Path1 and Path2 modtimes are identical, even if the sizes may
+  differ.)
+- If the file's content is currently identical on both sides, it is
+  not considered a "conflict", even if new or changed on both sides
+  since the prior sync. (For example, if you made a change on one side
+  and then synced it to the other side by other means.) Therefore,
+  none of the conflict resolution flags apply in this scenario.
+- The conflict resolution flags do not apply during a --resync, as
+  there is no "prior run" to speak of (but see --resync-mode for
+  similar options.)
+
+--conflict-loser CHOICE
+
+--conflict-loser determines what happens to the "loser" of a sync
+conflict (when --conflict-resolve determines a winner) or to both files
+(when there is no winner.) The currently supported options are:
+
+- num - (the default) - auto-number the conflicts by automatically
+  appending the next available number to the --conflict-suffix, in
+  chronological order. For example, with the default settings, the
+  first conflict for file.txt will be renamed file.txt.conflict1. If
+  file.txt.conflict1 already exists, file.txt.conflict2 will be used
+  instead (etc., up to a maximum of 9223372036854775807 conflicts.)
+- pathname - rename the conflicts according to which side they came
+  from, which was the default behavior prior to v1.66. For example,
+  with --conflict-suffix path, file.txt from Path1 will be renamed
+  file.txt.path1, and file.txt from Path2 will be renamed
+  file.txt.path2. If two non-identical suffixes are provided (ex.
+  --conflict-suffix cloud,local), the trailing digit is omitted.
+  Importantly, note that with pathname, there is no auto-numbering
+  beyond 2, so if file.txt.path2 somehow already exists, it will be
+  overwritten. Using a dynamic date variable in your --conflict-suffix
+  (see below) is one possible way to avoid this. Note also that
+  conflicts-of-conflicts are possible, if the original conflict is not
+  manually resolved -- for example, if for some reason you edited
+  file.txt.path1 on both sides, and those edits were different, the
+  result would be file.txt.path1.path1 and file.txt.path1.path2 (in
+  addition to file.txt.path2.)
+- delete - keep the winner only and delete the loser, instead of
+  renaming it. If a winner cannot be determined (see
+  --conflict-resolve for details on how this could happen), delete is
+  ignored and the default num is used instead (i.e. both versions are
+  kept and renamed, and neither is deleted.) delete is inherently the
+  most destructive option, so use it only with care.
+
+For all of the above options, note that if a winner cannot be determined
+(see --conflict-resolve for details on how this could happen), or if
+--conflict-resolve is not in use, both files will be renamed.
+
+--conflict-suffix STRING[,STRING]
+
+--conflict-suffix controls the suffix that is appended when bisync
+renames a --conflict-loser (default: conflict). --conflict-suffix will
+accept either one string or two comma-separated strings to assign
+different suffixes to Path1 vs. Path2. This may be helpful later in
+identifying the source of the conflict. (For example,
+--conflict-suffix dropboxconflict,laptopconflict)
+
+With --conflict-loser num, a number is always appended to the suffix.
+With --conflict-loser pathname, a number is appended only when one
+suffix is specified (or when two identical suffixes are specified.)
+That is, with --conflict-loser pathname, all of the following would
+produce exactly the same result:
+
+    --conflict-suffix path
+    --conflict-suffix path,path
+    --conflict-suffix path1,path2
+
+Suffixes may be as short as 1 character. By default, the suffix is
+appended after any other extensions (ex. file.jpg.conflict1); however,
+this can be changed with the --suffix-keep-extension flag (i.e. to
+instead result in file.conflict1.jpg).
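+
+For instance (a sketch, assuming the default num numbering), the
+following would name the first conflict of file.jpg
+file.sync-conflict1.jpg rather than file.jpg.sync-conflict1:
+
+    rclone bisync Path1 Path2 --conflict-suffix sync-conflict --suffix-keep-extension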
+
+--conflict-suffix supports several dynamic date variables when enclosed
+in curly braces as globs. This can be helpful to track the date and/or
+time that each conflict was handled by bisync. For example:
+
+    --conflict-suffix {DateOnly}-conflict
+    // result: myfile.txt.2006-01-02-conflict1
+
+All of the formats described here and here are supported, but take care
+to ensure that your chosen format does not use any characters that are
+illegal on your remotes (for example, macOS does not allow colons in
+filenames, and slashes are also best avoided as they are often
+interpreted as directory separators.) To address this particular issue,
+an additional {MacFriendlyTime} (or just {mac}) option is supported,
+which results in 2006-01-02 0304PM.
+
+Note that --conflict-suffix is entirely separate from rclone's main
+--suffix flag. This is intentional, as users may wish to use both flags
+simultaneously, if also using --backup-dir.
+
+Finally, note that the default in bisync prior to v1.66 was to rename
+conflicts with ..path1 and ..path2 (with two periods, and path instead
+of conflict.) Bisync now defaults to a single dot instead of a double
+dot, but additional dots can be added by including them in the specified
+suffix string. For example, for behavior equivalent to the previous
+default, use:
+
+    [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
+
 --check-sync

 Enabled by default, the check-sync function checks that all of the same

@@ -19198,39 +21041,51 @@ significantly reduce the sync run times for very large numbers of files.

 The check may be run manually with --check-sync=only. It runs only the
 integrity check and terminates without actually synching.

-See also: Concurrent modifications
+Note that currently, --check-sync only checks listing snapshots and NOT
+the actual files on the remotes. Note also that the listing snapshots
+will not know about any changes that happened during or after the latest
+bisync run, as those will be discovered on the next run. Therefore,
+while listings should always match each other at the end of a bisync
+run, it is expected that they will not match the underlying remotes, nor
+will the remotes match each other, if there were changes during or after
+the run. This is normal, and any differences will be detected and synced
+on the next run.

---ignore-listing-checksum
+For a robust integrity check of the current state of the remotes (as
+opposed to just their listing snapshots), consider using check (or
+cryptcheck, if at least one path is a crypt remote) instead of
+--check-sync, keeping in mind that differences are expected if files
+changed during or after your last bisync run.

-By default, bisync will retrieve (or generate) checksums (for backends
-that support them) when creating the listings for both paths, and store
-the checksums in the listing files. --ignore-listing-checksum will
-disable this behavior, which may speed things up considerably,
-especially on backends (such as local) where hashes must be computed on
-the fly instead of retrieved. Please note the following:
+For example, a possible sequence could look like this:

-- While checksums are (by default) generated and stored in the listing
-  files, they are NOT currently used for determining diffs (deltas).
-  It is anticipated that full checksum support will be added in a
-  future version.
-- --ignore-listing-checksum is NOT the same as --ignore-checksum, and
-  you may wish to use one or the other, or both.
In a nutshell: - --ignore-listing-checksum controls whether checksums are considered - when scanning for diffs, while --ignore-checksum controls whether - checksums are considered during the copy/sync operations that - follow, if there ARE diffs. -- Unless --ignore-listing-checksum is passed, bisync currently - computes hashes for one path even when there's no common hash with - the other path (for example, a crypt remote.) -- If both paths support checksums and have a common hash, AND - --ignore-listing-checksum was not specified when creating the - listings, --check-sync=only can be used to compare Path1 vs. Path2 - checksums (as of the time the previous listings were created.) - However, --check-sync=only will NOT include checksums if the - previous listings were generated on a run using - --ignore-listing-checksum. For a more robust integrity check of the - current state, consider using check (or cryptcheck, if at least one - path is a crypt remote.) +1. Normally scheduled bisync run: + + rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient + +2. Periodic independent integrity check (perhaps scheduled nightly or + weekly): + + rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt + +3. If diffs are found, you have some choices to correct them. If one + side is more up-to-date and you want to make the other side match + it, you could run: + + rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v + +(or switch Path1 and Path2 to make Path2 the source-of-truth) + +Or, if neither side is totally up-to-date, you could run a --resync to +bring them back into agreement (but remember that this could cause +deleted files to re-appear.) + +*Note also that rclone check does not currently include empty +directories, so if you want to know if any empty directories are out of +sync, consider alternatively running the above rclone sync command with +--dry-run added. + +See also: Concurrent modifications, --resilient --resilient @@ -19258,7 +21113,115 @@ the time of the next run, that next run will be allowed to proceed. Certain more serious errors will still enforce a --resync lockout, even in --resilient mode, to prevent data loss. -Behavior of --resilient may change in a future version. +Behavior of --resilient may change in a future version. (See also: +--recover, --max-lock, Graceful Shutdown) + +--recover + +If --recover is set, in the event of a sudden interruption or other +un-graceful shutdown, bisync will attempt to automatically recover on +the next run, instead of requiring --resync. Bisync is able to recover +robustly by keeping one "backup" listing at all times, representing the +state of both paths after the last known successful sync. Bisync can +then compare the current state with this snapshot to determine which +changes it needs to retry. Changes that were synced after this snapshot +(during the run that was later interrupted) will appear to bisync as if +they are "new or changed on both sides", but in most cases this is not a +problem, as bisync will simply do its usual "equality check" and learn +that no action needs to be taken on these files, since they are already +identical on both sides. 
+
+In the rare event that a file is synced successfully during a run that
+later aborts, and then that same file changes AGAIN before the next run,
+bisync will think it is a sync conflict, and handle it accordingly.
+(From bisync's perspective, the file has changed on both sides since the
+last trusted sync, and the files on either side are not currently
+identical.) Therefore, --recover carries with it a slightly increased
+chance of having conflicts -- though in practice this is pretty rare, as
+the conditions required to cause it are quite specific. This risk can be
+reduced by using bisync's "Graceful Shutdown" mode (triggered by sending
+SIGINT or Ctrl+C), when you have the choice, instead of forcing a sudden
+termination.
+
+--recover and --resilient are similar, but distinct -- the main
+difference is that --resilient is about retrying, while --recover is
+about recovering. Most users will probably want both. --resilient allows
+retrying when bisync has chosen to abort itself due to safety features
+such as failing --check-access or detecting a filter change. --resilient
+does not cover external interruptions such as a user shutting down their
+computer in the middle of a sync -- that is what --recover is for.
+
+--max-lock
+
+Bisync uses lock files as a safety feature to prevent interference from
+other bisync runs while it is running. Bisync normally removes these
+lock files at the end of a run, but if bisync is abruptly interrupted,
+these files will be left behind. By default, they will lock out all
+future runs, until the user has a chance to manually check things out
+and remove the lock. As an alternative, --max-lock can be used to make
+them automatically expire after a certain period of time, so that future
+runs are not locked out forever, and auto-recovery is possible.
+--max-lock can be any duration 2m or greater (or 0 to disable). If set,
+lock files older than this will be considered "expired", and future runs
+will be allowed to disregard them and proceed. (Note that the --max-lock
+duration must be set by the process that left the lock file -- not the
+later one interpreting it.)
+
+If set, bisync will also "renew" these lock files every
+--max-lock minus one minute throughout a run, for extra safety. (For
+example, with --max-lock 5m, bisync would renew the lock file (for
+another 5 minutes) every 4 minutes until the run has completed.) In
+other words, it should not be possible for a lock file to pass its
+expiration time while the process that created it is still running --
+and you can therefore be reasonably sure that any expired lock file you
+may find was left there by an interrupted run, not one that is still
+running and just taking a while.
+
+If --max-lock is 0 or not set, the default is that lock files will never
+expire, and will block future runs (of these same two bisync paths)
+indefinitely.
+
+For maximum resilience against disruptions, consider setting a
+relatively short duration like --max-lock 2m along with --resilient and
+--recover, and a relatively frequent cron schedule. The result will be a
+very robust "set-it-and-forget-it" bisync run that can automatically
+bounce back from almost any interruption it might encounter, without
+requiring the user to get involved and run a --resync. (See also:
+Graceful Shutdown mode)
+
+--backup-dir1 and --backup-dir2
+
+As of v1.66, --backup-dir is supported in bisync.
Because --backup-dir
+must be a non-overlapping path on the same remote, Bisync has introduced
+new --backup-dir1 and --backup-dir2 flags to support separate
+backup-dirs for Path1 and Path2 (bisyncing between different remotes
+with --backup-dir would not otherwise be possible.) --backup-dir1 and
+--backup-dir2 can use different remotes from each other, but
+--backup-dir1 must use the same remote as Path1, and --backup-dir2 must
+use the same remote as Path2. Each backup directory must not overlap its
+respective bisync Path without being excluded by a filter rule.
+
+The standard --backup-dir will also work, if both paths use the same
+remote (but note that deleted files from both paths would be mixed
+together in the same dir). If either --backup-dir1 or --backup-dir2 is
+set, it will override --backup-dir.
+
+Example:
+
+    rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
+
+In this example, if the user deletes a file in
+/Users/someuser/some/local/path/Bisync, bisync will propagate the delete
+to the other side by moving the corresponding file from gdrive:Bisync to
+gdrive:BackupDir. If the user deletes a file from gdrive:Bisync, bisync
+moves it from /Users/someuser/some/local/path/Bisync to
+/Users/someuser/some/local/path/BackupDir.
+
+In the event of a rename due to a sync conflict, the rename is not
+considered a delete, unless a previous conflict with the same name
+already exists and would get overwritten.
+
+See also: --suffix, --suffix-keep-extension

 Operation

@@ -19275,8 +21238,9 @@

 Safety measures

 - Lock file prevents multiple simultaneous runs when taking a while.
   This can be particularly useful if bisync is run by cron scheduler.
-- Handle change conflicts non-destructively by creating ..path1 and
-  ..path2 file versions.
+- Handle change conflicts non-destructively by creating .conflict1,
+  .conflict2, etc. file versions, according to --conflict-resolve,
+  --conflict-loser, and --conflict-suffix settings.
 - File system access health check using RCLONE_TEST files (see the
   --check-access flag).
- Abort on excessive deletes - protects against a failed listing being @@ -19317,45 +21281,50 @@ Normal sync checks Unusual sync checks - ---------------------------------------------------------------------------- - Type Description Result Implementation - ----------------- --------------------- ------------------- ---------------- - Path1 new/changed File is new/changed No change None - AND Path2 on Path1 AND - new/changed AND new/changed on Path2 - Path1 == Path2 AND Path1 version is - currently identical - to Path2 + ------------------------------------------------------------------------------ + Type Description Result Implementation + ----------------- --------------------- -------------------- ----------------- + Path1 new/changed File is new/changed No change None + AND Path2 on Path1 AND + new/changed AND new/changed on Path2 + Path1 == Path2 AND Path1 version is + currently identical + to Path2 - Path1 new AND File is new on Path1 Files renamed to rclone copy - Path2 new AND new on Path2 (and _Path1 and _Path2 _Path2 file to - Path1 version is NOT Path1, - identical to Path2) rclone copy - _Path1 file to - Path2 + Path1 new AND File is new on Path1 Conflicts handled default: + Path2 new AND new on Path2 (and according to rclone copy + Path1 version is NOT --conflict-resolve & renamed + identical to Path2) --conflict-loser Path2.conflict2 + settings file to Path1, + rclone copy + renamed + Path1.conflict1 + file to Path2 - Path2 newer AND File is newer on Files renamed to rclone copy - Path1 changed Path2 AND also _Path1 and _Path2 _Path2 file to - changed Path1, - (newer/older/size) on rclone copy - Path1 (and Path1 _Path1 file to - version is NOT Path2 - identical to Path2) + Path2 newer AND File is newer on Conflicts handled default: + Path1 changed Path2 AND also according to rclone copy + changed --conflict-resolve & renamed + (newer/older/size) on --conflict-loser Path2.conflict2 + Path1 (and Path1 settings file to Path1, + version is NOT rclone copy + identical to Path2) renamed + Path1.conflict1 + file to Path2 - Path2 newer AND File is newer on Path2 version rclone copy - Path1 deleted Path2 AND also survives Path2 to Path1 - deleted on Path1 + Path2 newer AND File is newer on Path2 version rclone copy Path2 + Path1 deleted Path2 AND also survives to Path1 + deleted on Path1 - Path2 deleted AND File is deleted on Path1 version rclone copy - Path1 changed Path2 AND changed survives Path1 to Path2 - (newer/older/size) on - Path1 + Path2 deleted AND File is deleted on Path1 version rclone copy Path1 + Path1 changed Path2 AND changed survives to Path2 + (newer/older/size) on + Path1 - Path1 deleted AND File is deleted on Path2 version rclone copy - Path2 changed Path1 AND changed survives Path2 to Path1 - (newer/older/size) on - Path2 - ---------------------------------------------------------------------------- + Path1 deleted AND File is deleted on Path2 version rclone copy Path2 + Path2 changed Path1 AND changed survives to Path1 + (newer/older/size) on + Path2 + ------------------------------------------------------------------------------ As of rclone v1.64, bisync is now better at detecting false positive sync conflicts, which would previously have resulted in unnecessary @@ -19364,9 +21333,9 @@ to rename (because it is new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently identical (using the same underlying function as check.) If bisync concludes that the files are identical, it will skip them and move on. 
Otherwise, it will create -renamed ..Path1 and ..Path2 duplicates, as before. This behavior also -improves the experience of renaming directories, as a --resync is no -longer required, so long as the same change has been made on both sides. +renamed duplicates, as before. This behavior also improves the +experience of renaming directories, as a --resync is no longer required, +so long as the same change has been made on both sides. All files changed check @@ -19381,19 +21350,12 @@ the changes. Modification times -Bisync relies on file timestamps to identify changed files and will -refuse to operate if backend lacks the modification time support. - -If you or your application should change the content of a file without -changing the modification time then bisync will not notice the change, -and thus will not copy it to the other side. - -Note that on some cloud storage systems it is not possible to have file -timestamps that match precisely between the local and other filesystems. - -Bisync's approach to this problem is by tracking the changes on each -side separately over time with a local database of files in that side -then applying the resulting changes on the other side. +By default, bisync compares files by modification time and size. If you +or your application should change the content of a file without changing +the modification time and size, then bisync will not notice the change, +and thus will not copy it to the other side. As an alternative, consider +comparing by checksum (if your remotes support it). See --compare for +details. Error handling @@ -19417,7 +21379,7 @@ at ${HOME}/.cache/rclone/bisync/ on Linux. Some errors are considered temporary and re-running the bisync is not blocked. The critical return blocks further bisync runs. -See also: --resilient +See also: --resilient, --recover, --max-lock, Graceful Shutdown Lock file @@ -19428,7 +21390,9 @@ place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains PID of -the blocking process, which may help in debug. +the blocking process, which may help in debug. Lock files can be set to +automatically expire after a certain amount of time, using the +--max-lock flag. Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent @@ -19441,69 +21405,74 @@ successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 for a critically aborted run (requires a --resync to recover). +Graceful Shutdown + +Bisync has a "Graceful Shutdown" mode which is activated by sending +SIGINT or pressing Ctrl+C during a run. Once triggered, bisync will use +best efforts to exit cleanly before the timer runs out. If bisync is in +the middle of transferring files, it will attempt to cleanly empty its +queue by finishing what it has started but not taking more. If it cannot +do so within 30 seconds, it will cancel the in-progress transfers at +that point and then give itself a maximum of 60 seconds to wrap up, save +its state for next time, and exit. With the -vP flags you will see +constant status updates and a final confirmation of whether or not the +graceful shutdown was successful. 
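+
+If bisync is running unattended (for example, in the background or under
+cron), the same graceful shutdown can be requested by sending SIGINT to
+the process from another terminal. A minimal sketch, assuming a single
+rclone bisync process on a Unix-like system:
+
+    # politely request a graceful shutdown (equivalent to pressing Ctrl+C)
+    pkill -INT -f 'rclone bisync'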
+ +At any point during the "Graceful Shutdown" sequence, a second SIGINT or +Ctrl+C will trigger an immediate, un-graceful exit, which will leave +things in a messier state. Usually a robust recovery will still be +possible if using --recover mode, otherwise you will need to do a +--resync. + +If you plan to use Graceful Shutdown mode, it is recommended to use +--resilient and --recover, and it is important to NOT use --inplace, +otherwise you risk leaving partially-written files on one side, which +may be confused for real files on the next run. Note also that in the +event of an abrupt interruption, a lock file will be left behind to +block concurrent runs. You will need to delete it before you can proceed +with the next run (or wait for it to expire on its own, if using +--max-lock.) + Limitations Supported backends Bisync is considered BETA and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - OneDrive - S3 - -SFTP - Yandex Disk +SFTP - Yandex Disk - Crypt It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below. -First release of rclone bisync requires that underlying backend supports -the modification time feature and will refuse to run otherwise. This -limitation will be lifted in a future rclone bisync release. +The first release of rclone bisync required both underlying backends to +support modification times, and refused to run otherwise. This +limitation has been lifted as of v1.66, as bisync now supports comparing +checksum and/or size instead of (or in addition to) modtime. See +--compare for details. Concurrent modifications -When using Local, FTP or SFTP remotes rclone does not create temporary -files at the destination when copying, and thus if the connection is -lost the created file may be corrupt, which will likely propagate back -to the original path on the next sync, resulting in data loss. This will -be solved in a future release, there is no workaround at the moment. +When using Local, FTP or SFTP remotes with --inplace, rclone does not +create temporary files at the destination when copying, and thus if the +connection is lost the created file may be corrupt, which will likely +propagate back to the original path on the next sync, resulting in data +loss. It is therefore recommended to omit --inplace. -Files that change during a bisync run may result in data loss. This has -been seen in a highly dynamic environment, where the filesystem is -getting hammered by running processes during the sync. The currently -recommended solution is to sync at quiet times or filter out unnecessary -directories and files. - -As an alternative approach, consider using --check-sync=false (and -possibly --resilient) to make bisync more forgiving of filesystems that -change during the sync. Be advised that this may cause bisync to miss -events that occur during a bisync run, so it is a good idea to -supplement this with a periodic independent integrity check, and -corrective sync if diffs are found. For example, a possible sequence -could look like this: - -1. Normally scheduled bisync run: - - rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient - -2. 
Periodic independent integrity check (perhaps scheduled nightly or - weekly): - - rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt - -3. If diffs are found, you have some choices to correct them. If one - side is more up-to-date and you want to make the other side match - it, you could run: - - rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v - -(or switch Path1 and Path2 to make Path2 the source-of-truth) - -Or, if neither side is totally up-to-date, you could run a --resync to -bring them back into agreement (but remember that this could cause -deleted files to re-appear.) - -*Note also that rclone check does not currently include empty -directories, so if you want to know if any empty directories are out of -sync, consider alternatively running the above rclone sync command with ---dry-run added. +Files that change during a bisync run may result in data loss. Prior to +rclone v1.66, this was commonly seen in highly dynamic environments, +where the filesystem was getting hammered by running processes during +the sync. As of rclone v1.66, bisync was redesigned to use a "snapshot" +model, greatly reducing the risks from changes during a sync. Changes +that are not detected during the current sync will now be detected +during the following sync, and will no longer cause the entire run to +throw a critical error. There is additionally a mechanism to mark files +as needing to be internally rechecked next time, for added safety. It +should therefore no longer be necessary to sync only at quiet times -- +however, note that an error can still occur if a file happens to change +at the exact moment it's being read/written by bisync (same as would +happen in rclone sync.) (See also: --ignore-checksum, +--local-no-check-updated) Empty directories @@ -19525,14 +21494,20 @@ but it's still probably best to stick to one or the other, and use Renamed directories -Renaming a folder on the Path1 side results in deleting all files on the -Path2 side and then copying all files again from Path1 to Path2. Bisync -sees this as all files in the old directory name as deleted and all -files in the new directory name as new. Currently, the most effective -and efficient method of renaming a directory is to rename it to the same -name on both sides. (As of rclone v1.64, a --resync is no longer -required after doing so, as bisync will automatically detect that Path1 -and Path2 are in agreement.) +By default, renaming a folder on the Path1 side results in deleting all +files on the Path2 side and then copying all files again from Path1 to +Path2. Bisync sees this as all files in the old directory name as +deleted and all files in the new directory name as new. + +A recommended solution is to use --track-renames, which is now supported +in bisync as of rclone v1.66. Note that --track-renames is not available +during --resync, as --resync does not delete anything (--track-renames +only supports sync, not copy.) + +Otherwise, the most effective and efficient method of renaming a +directory is to rename it to the same name on both sides. (As of +rclone v1.64, a --resync is no longer required after doing so, as bisync +will automatically detect that Path1 and Path2 are in agreement.) --fast-list used by default @@ -19544,28 +21519,21 @@ users with many empty directories. For now, the recommended way to avoid using --fast-list is to add --disable ListR to all bisync commands. The default behavior may change in a future version. 
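+
+For example (paths illustrative):
+
+    rclone bisync /path/to/local remote:path --disable ListR
+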
-Overridden Configs
+Case (and unicode) sensitivity

-When rclone detects an overridden config, it adds a suffix like {ABCDE}
-on the fly to the internal name of the remote. Bisync follows suit by
-including this suffix in its listing filenames. However, this suffix
-does not necessarily persist from run to run, especially if different
-flags are provided. So if next time the suffix assigned is {FGHIJ},
-bisync will get confused, because it's looking for a listing file with
-{FGHIJ}, when the file it wants has {ABCDE}. As a result, it throws
-Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run
-and refuses to run again until the user runs a --resync (unless using
---resilient). The best workaround at the moment is to set any
-backend-specific flags in the config file instead of specifying them
-with command flags. (You can still override them as needed for other
-rclone commands.)
+As of v1.66, case and unicode form differences no longer cause critical
+errors, and normalization (when comparing between filesystems) is
+handled according to the same flags and defaults as rclone sync. See the
+following options (all of which are supported by bisync) to control this
+behavior more granularly:
+
+- --fix-case
+- --ignore-case-sync
+- --no-unicode-normalization
+- --local-unicode-normalization and --local-case-sensitive (caution:
+  these are normally not what you want.)

-Case sensitivity
-
-Synching with case-insensitive filesystems, such as Windows or Box, can
-result in file name conflicts. This will be fixed in a future release.
-The near-term workaround is to make sure that files on both sides don't
-have spelling case differences (Smile.jpg vs. smile.jpg).
+Note that in the (probably rare) event that --fix-case is used AND a
+file is new/changed on both sides AND the checksums match AND the
+filename case does not match, the Path1 filename is considered the
+winner, for the purposes of --fix-case (Path2 will be renamed to match
+it).

 Windows support

@@ -19844,22 +21812,57 @@ specifically which files are generating complaints. If the error is

 This file has been identified as malware or spam and cannot be
 downloaded, consider using the flag --drive-acknowledge-abuse.

-Google Doc files
+Google Docs (and other files of unknown size)

-Google docs exist as virtual files on Google Drive and cannot be
-transferred to other filesystems natively. While it is possible to
-export a Google doc to a normal file (with .xlsx extension, for
-example), it is not possible to import a normal file back into a Google
-document.
+As of v1.66, Google Docs (including Google Sheets, Slides, etc.) are now
+supported in bisync, subject to the same options, defaults, and
+limitations as in rclone sync. When bisyncing drive with non-drive
+backends, the drive -> non-drive direction is controlled by
+--drive-export-formats (default "docx,xlsx,pptx,svg") and the non-drive
+-> drive direction is controlled by --drive-import-formats (default
+none.)

-Bisync's handling of Google Doc files is to flag them in the run log
-output for user's attention and ignore them for any file transfers,
-deletes, or syncs. They will show up with a length of -1 in the
-listings. This bisync run is otherwise successful:
+For example, with the default export/import formats, a Google Sheet on
+the drive side will be synced to an .xlsx file on the non-drive side.
In +the reverse direction, .xlsx files with filenames that match an existing +Google Sheet will be synced to that Google Sheet, while .xlsx files that +do NOT match an existing Google Sheet will be copied to drive as normal +.xlsx files (without conversion to Sheets, although the Google Drive web +browser UI may still give you the option to open it as one.) - 2021/05/11 08:23:15 INFO : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:" - 2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx" - 2021/05/11 08:23:15 INFO : Bisync successful +If --drive-import-formats is set (it's not, by default), then all of the +specified formats will be converted to Google Docs, if there is no +existing Google Doc with a matching name. Caution: such conversion can +be quite lossy, and in most cases it's probably not what you want! + +To bisync Google Docs as URL shortcut links (in a manner similar to +"Drive for Desktop"), use: --drive-export-formats url (or alternatives.) + +Note that these link files cannot be edited on the non-drive side -- you +will get errors if you try to sync an edited link file back to drive. +They CAN be deleted (it will result in deleting the corresponding Google +Doc.) If you create a .url file on the non-drive side that does not +match an existing Google Doc, bisyncing it will just result in copying +the literal .url file over to drive (no Google Doc will be created.) So, +as a general rule of thumb, think of them as read-only placeholders on +the non-drive side, and make all your changes on the drive side. + +Likewise, even with other export-formats, it is best to only move/rename +Google Docs on the drive side. This is because otherwise, bisync will +interpret this as a file deleted and another created, and accordingly, +it will delete the Google Doc and create a new file at the new path. +(Whether or not that new file is a Google Doc depends on +--drive-import-formats.) + +Lastly, take note that all Google Docs on the drive side have a size of +-1 and no checksum. Therefore, they cannot be reliably synced with the +--checksum or --size-only flags. (To be exact: they will still get +created/deleted, and bisync's delta engine will notice changes and queue +them for syncing, but the underlying sync function will consider them +identical and skip them.) To work around this, use the default (modtime +and size) instead of --checksum or --size-only. + +To ignore Google Docs entirely, use --drive-skip-gdocs. Usage examples @@ -20231,6 +22234,54 @@ Unison and synchronization in general. Changelog +v1.66 + +- Copies and deletes are now handled in one operation instead of two +- --track-renames and --backup-dir are now supported +- Partial uploads known issue on local/ftp/sftp has been resolved + (unless using --inplace) +- Final listings are now generated from sync results, to avoid needing + to re-list +- Bisync is now much more resilient to changes that happen during a + bisync run, and far less prone to critical errors / undetected + changes +- Bisync is now capable of rolling a file listing back in cases of + uncertainty, essentially marking the file as needing to be rechecked + next time. +- A few basic terminal colors are now supported, controllable with + --color (AUTO|NEVER|ALWAYS) +- Initial listing snapshots of Path1 and Path2 are now generated + concurrently, using the same "march" infrastructure as check and + sync, for performance improvements and less risk of error. 
+- Fixed handling of unicode normalization and case insensitivity,
+  support for --fix-case, --ignore-case-sync,
+  --no-unicode-normalization
+- --resync is now much more efficient (especially for users of
+  --create-empty-src-dirs)
+- Google Docs (and other files of unknown size) are now supported
+  (with the same options as in sync)
+- Equality checks before a sync conflict rename now fall back to
+  cryptcheck (when possible) or --download, instead of --size-only,
+  when check is not available.
+- Bisync no longer fails to find the correct listing file when configs
+  are overridden with backend-specific flags.
+- Bisync now fully supports comparing based on any combination of
+  size, modtime, and checksum, lifting the prior restriction on
+  backends without modtime support.
+- Bisync now supports a "Graceful Shutdown" mode to cleanly cancel a
+  run early without requiring --resync.
+- New --recover flag allows robust recovery in the event of
+  interruptions, without requiring --resync.
+- A new --max-lock setting allows lock files to automatically renew
+  and expire, for better automatic recovery when a run is interrupted.
+- Bisync now supports auto-resolving sync conflicts and customizing
+  rename behavior with new --conflict-resolve, --conflict-loser, and
+  --conflict-suffix flags.
+- A new --resync-mode flag allows more control over which version of a
+  file gets kept during a --resync.
+- Bisync now supports --retries and --retries-sleep (when --resilient
+  is set.)
+
 v1.64

 - Fixed an issue causing dry runs to inadvertently commit filter

@@ -20582,6 +22633,17 @@ Properties:

 - Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot

+--fichier-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FICHIER_DESCRIPTION
+- Type: string
+- Required: false
+
 Limitations

 rclone about is not supported by the 1Fichier backend. Backends without

@@ -20697,334 +22759,22 @@ Properties:

 - Type: string
 - Required: true

-Amazon Drive
-
-Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
-service run by Amazon for consumers.
-
-Status
-
-Important: rclone supports Amazon Drive only if you have your own set of
-API keys. Unfortunately the Amazon Drive developer program is now closed
-to new entries so if you don't already have your own set of keys you
-will not be able to use rclone with Amazon Drive.
-
-For the history on why rclone no longer has a set of Amazon Drive API
-keys see the forum.
-
-If you happen to know anyone who works at Amazon then please ask them to
-re-instate rclone into the Amazon Drive developer program - thanks!
-
-Configuration
-
-The initial setup for Amazon Drive involves getting a token from Amazon
-which you need to do in your browser. rclone config walks you through
-it.
-
-The configuration process for Amazon Drive may involve using an oauth
-proxy. This is used to keep the Amazon credentials out of the source
-code. The proxy runs in Google's very secure App Engine environment and
-doesn't store any credentials which pass through it.
-
-Since rclone doesn't currently have its own Amazon Drive credentials so
-you will either need to have your own client_id and client_secret with
-Amazon Drive, or use a third-party oauth proxy in which case you will
-need to enter client_id, client_secret, auth_url and token_url.
- -Note also if you are not using Amazon's auth_url and token_url, (ie you -filled in something for those) then if setting up on a remote machine -you can only use the copying the config method of configuration - -rclone authorize will not work. - -Here is an example of how to make a remote called remote. First run: - - rclone config - -This will guide you through an interactive setup process: - - No remotes found, make a new one? - n) New remote - r) Rename remote - c) Copy remote - s) Set configuration password - q) Quit config - n/r/c/s/q> n - name> remote - Type of storage to configure. - Choose a number from below, or type in your own value - [snip] - XX / Amazon Drive - \ "amazon cloud drive" - [snip] - Storage> amazon cloud drive - Amazon Application Client Id - required. - client_id> your client ID goes here - Amazon Application Client Secret - required. - client_secret> your client secret goes here - Auth server URL - leave blank to use Amazon's. - auth_url> Optional auth URL - Token server url - leave blank to use Amazon's. - token_url> Optional token URL - Remote config - Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config. - Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access - If not sure try Y. If Y failed, try N. - y) Yes - n) No - y/n> y - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... - Got code - -------------------- - [remote] - client_id = your client ID goes here - client_secret = your client secret goes here - auth_url = Optional auth URL - token_url = Optional token URL - token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"} - -------------------- - y) Yes this is OK - e) Edit this remote - d) Delete this remote - y/e/d> y - -See the remote setup docs for how to set it up on a machine with no -Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. This only runs from the moment it opens -your browser to the moment you get back the verification code. This is -on http://127.0.0.1:53682/ and this it may require you to unblock it -temporarily if you are running a host firewall. - -Once configured you can then use rclone like this, - -List directories in top level of your Amazon Drive - - rclone lsd remote: - -List all the files in your Amazon Drive - - rclone ls remote: - -To copy a local directory to an Amazon Drive directory called backup - - rclone copy /home/source remote:backup - -Modification times and hashes - -Amazon Drive doesn't allow modification times to be changed via the API -so these won't be accurate or used for syncing. - -It does support the MD5 hash algorithm, so for a more accurate sync, you -can use the --checksum flag. - -Restricted filename characters - - Character Value Replacement - ----------- ------- ------------- - NUL 0x00 ␀ - / 0x2F / - -Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON -strings. - -Deleting files - -Any files you delete with rclone will end up in the trash. 
Amazon don't -provide an API to permanently delete files, nor to empty the trash, so -you will have to do that with one of Amazon's apps or via the Amazon -Drive website. As of November 17, 2016, files are automatically deleted -by Amazon from the trash after 30 days. - -Using with non .com Amazon accounts - -Let's say you usually use amazon.co.uk. When you authenticate with -rclone it will take you to an amazon.com page to log in. Your -amazon.co.uk email and password should work here just fine. - -Standard options - -Here are the Standard options specific to amazon cloud drive (Amazon -Drive). - ---acd-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_ACD_CLIENT_ID -- Type: string -- Required: false - ---acd-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_ACD_CLIENT_SECRET -- Type: string -- Required: false - Advanced options -Here are the Advanced options specific to amazon cloud drive (Amazon -Drive). +Here are the Advanced options specific to alias (Alias for an existing +remote). ---acd-token +--alias-description -OAuth Access Token as a JSON blob. +Description of the remote Properties: -- Config: token -- Env Var: RCLONE_ACD_TOKEN +- Config: description +- Env Var: RCLONE_ALIAS_DESCRIPTION - Type: string - Required: false ---acd-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_ACD_AUTH_URL -- Type: string -- Required: false - ---acd-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_ACD_TOKEN_URL -- Type: string -- Required: false - ---acd-checkpoint - -Checkpoint for internal polling (debug). - -Properties: - -- Config: checkpoint -- Env Var: RCLONE_ACD_CHECKPOINT -- Type: string -- Required: false - ---acd-upload-wait-per-gb - -Additional time per GiB to wait after a failed complete upload to see if -it appears. - -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. This happens -sometimes for files over 1 GiB in size and nearly every time for files -bigger than 10 GiB. This parameter controls the time rclone waits for -the file to appear. - -The default value for this parameter is 3 minutes per GiB, so by default -it will wait 3 minutes for every GiB uploaded to see if the file -appears. - -You can disable this feature by setting it to 0. This may cause conflict -errors as rclone retries the failed upload but the file will most likely -appear correctly eventually. - -These values were determined empirically by observing lots of uploads of -big files for a range of file sizes. - -Upload with the "-v" flag to see more info about what rclone is doing in -this situation. - -Properties: - -- Config: upload_wait_per_gb -- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB -- Type: Duration -- Default: 3m0s - ---acd-templink-threshold - -Files >= this size will be downloaded via their tempLink. - -Files this size or more will be downloaded via their "tempLink". This is -to work around a problem with Amazon Drive which blocks downloads of -files bigger than about 10 GiB. The default for this is 9 GiB which -shouldn't need to be changed. - -To download files above this threshold, rclone requests a "tempLink" -which downloads the file through a temporary URL directly from the -underlying S3 storage. 
- -Properties: - -- Config: templink_threshold -- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD -- Type: SizeSuffix -- Default: 9Gi - ---acd-encoding - -The encoding for the backend. - -See the encoding section in the overview for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_ACD_ENCODING -- Type: Encoding -- Default: Slash,InvalidUtf8,Dot - -Limitations - -Note that Amazon Drive is case insensitive so you can't have a file -called "Hello.doc" and one called "hello.doc". - -Amazon Drive has rate limiting so you may notice errors in the sync (429 -errors). rclone will automatically retry the sync up to 3 times by -default (see --retries flag) which should hopefully work around this -problem. - -Amazon Drive has an internal limit of file sizes that can be uploaded to -the service. This limit is not officially published, but all files -larger than this will fail. - -At the time of writing (Jan 2016) is in the area of 50 GiB per file. -This means that larger files are likely to fail. - -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. To avoid this problem, use --max-size 50000M option to limit -the maximum size of uploaded files. Note that --max-size does not split -files into segments, it only ignores files over this size. - -rclone about is not supported by the Amazon Drive backend. Backends -without this capability cannot determine free space for an rclone mount -or use policy mfs (most free space) as a member of an rclone union -remote. - -See List of backends that do not support rclone about and rclone about - Amazon S3 Storage Providers The S3 backend can be used with a number of different providers: @@ -21103,7 +22853,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Liara, Minio, and Tencent COS + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" [snip] Storage> s3 @@ -21602,6 +23352,7 @@ permissions are required to be available on the bucket being written to: - GetObject - PutObject - PutObjectACL +- CreateBucket (unless using s3-no-check-bucket) When using the lsd subcommand, the ListAllMyBuckets permission is required. @@ -21642,6 +23393,8 @@ Notes on above: that USER_NAME has been created. 2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects. +3. When using s3-no-check-bucket and the bucket already exsits, the + "arn:aws:s3:::BUCKET_NAME" doesn't have to be included. For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync. @@ -22397,10 +24150,10 @@ Properties: --s3-upload-concurrency -Concurrency for multipart uploads. +Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded -concurrently. +concurrently for multipart uploads and copies. If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing @@ -22448,6 +24201,19 @@ Properties: - Type: bool - Default: false +--s3-use-dual-stack + +If true use AWS S3 dual-stack endpoint (IPv6 support). 
+ +See AWS Docs on Dualstack Endpoints + +Properties: + +- Config: use_dual_stack +- Env Var: RCLONE_S3_USE_DUAL_STACK +- Type: bool +- Default: false + --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. @@ -22748,6 +24514,25 @@ Properties: - Type: Time - Default: off +--s3-version-deleted + +Show deleted file markers when using versions. + +This shows deleted file markers in the listing when using versions. +These will appear as 0 size files. The only operation which can be +performed on them is deletion. + +Deleting a delete marker will reveal the previous version. + +Deleted files will always show with a timestamp. + +Properties: + +- Config: version_deleted +- Env Var: RCLONE_S3_VERSION_DELETED +- Type: bool +- Default: false + --s3-decompress If set this will decompress gzip encoded objects. @@ -22894,6 +24679,17 @@ Properties: - Type: Tristate - Default: unset +--s3-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_S3_DESCRIPTION +- Type: string +- Required: false + Metadata User metadata is stored as x-amz-meta- keys. S3 metadata keys are case @@ -23473,10 +25269,10 @@ Or you can also configure via the interactive command line: Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] - Storage> 5 + Storage> s3 Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -23599,18 +25395,11 @@ To configure access to IBM COS S3, follow the steps below: 3. Select "s3" storage. Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \ "alias" - 2 / Amazon Drive - \ "amazon cloud drive" - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS) - \ "s3" - 4 / Backblaze B2 - \ "b2" [snip] - 23 / HTTP - \ "http" - Storage> 3 + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 4. Select IBM COS as the S3 Storage Provider. @@ -23764,7 +25553,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -23870,7 +25659,7 @@ Type s3 to choose the connection type: Type of storage to configure. Choose a number from below, or type in your own value. [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -24096,15 +25885,8 @@ To configure access to Qiniu Kodo, follow the steps below: 3. 
Select s3 storage. Choose a number from below, or type in your own value - 1 / 1Fichier - \ (fichier) - 2 / Akamai NetStorage - \ (netstorage) - 3 / Alias for an existing remote - \ (alias) - 4 / Amazon Drive - \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -24364,7 +26146,7 @@ Choose s3 backend Type of storage to configure. Choose a number from below, or type in your own value. [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -24638,7 +26420,7 @@ This will guide you through an interactive setup process. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value [snip] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" [snip] Storage> s3 @@ -24746,7 +26528,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value. ... - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) ... Storage> s3 @@ -24998,15 +26780,8 @@ To configure access to Leviia, follow the steps below: 3. Select s3 storage. Choose a number from below, or type in your own value - 1 / 1Fichier - \ (fichier) - 2 / Akamai NetStorage - \ (netstorage) - 3 / Alias for an existing remote - \ (alias) - 4 / Amazon Drive - \ (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 @@ -25207,7 +26982,7 @@ This will guide you through an interactive setup process. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others + XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others \ (s3) [snip] Storage> s3 @@ -25444,13 +27219,8 @@ To configure access to Tencent COS, follow the steps below: 3. Select s3 storage. 
Choose a number from below, or type in your own value - 1 / 1Fichier - \ "fichier" - 2 / Alias for an existing remote - \ "alias" - 3 / Amazon Drive - \ "amazon cloud drive" - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" [snip] Storage> s3 @@ -25839,7 +27609,7 @@ This will guide you through an interactive setup process. Enter a string value. Press Enter for the default (""). Choose a number from below, or type in your own value - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi + XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3" Storage> s3 @@ -26445,9 +28215,12 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx #### --b2-download-auth-duration - Time before the authorization token will expire in s or suffix ms|s|m|h|d. + Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. + + This is used in combination with "rclone link" for making files + accessible to the public and sets the duration before the download + authorization token will expire. - The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Properties: @@ -26523,6 +28296,17 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + #### --b2-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_B2_DESCRIPTION + - Type: string + - Required: false + ## Backend commands Here are the commands specific to the b2 backend. @@ -27015,6 +28799,17 @@ c) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot + #### --box-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_BOX_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -27674,6 +29469,17 @@ dummyusername plex_password = *** ENCRYPTED *** chunk_size = 5M info_age - Type: Duration - Default: 1s + #### --cache-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_CACHE_DESCRIPTION + - Type: string + - Required: false + ## Backend commands Here are the commands specific to the cache backend. @@ -28137,6 +29943,17 @@ this remote y/e/d> y - If meta format is set to "none", rename transactions will always be used. - This method is EXPERIMENTAL, don't use on production systems. 
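+
+ As an illustrative sketch (the remote name is assumed, and the
+ environment variable follows the standard RCLONE_CHUNKER_* convention
+ shown for the other options above), the transaction method can be
+ overridden for a single invocation:
+
+     RCLONE_CHUNKER_TRANSACTIONS=rename rclone move /src overlay:dst
+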
+ #### --chunker-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_CHUNKER_DESCRIPTION + - Type: string + - Required: false + # Citrix ShareFile @@ -28409,6 +30226,17 @@ this remote y/e/d> y - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot + #### --sharefile-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_SHAREFILE_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -28929,6 +30757,22 @@ subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin - Type: bool - Default: false + #### --crypt-strict-names + + If set, this will raise an error when crypt comes across a filename that can't be decrypted. + + (By default, rclone will just log a NOTICE and continue as normal.) + This can happen if encrypted and unencrypted files are stored in the same + directory (which is not recommended.) It may also indicate a more serious + problem that should be investigated. + + Properties: + + - Config: strict_names + - Env Var: RCLONE_CRYPT_STRICT_NAMES + - Type: bool + - Default: false + #### --crypt-filename-encoding How to encode the encrypted filename to text string. @@ -28966,6 +30810,17 @@ subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin - Type: string - Default: ".bin" + #### --crypt-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_CRYPT_DESCRIPTION + - Type: string + - Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -29134,7 +30989,7 @@ subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin * we strip the padding character `=` `base32` is used rather than the more efficient `base64` so rclone can be - used on case insensitive remotes (e.g. Windows, Amazon Drive). + used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc). ### Key derivation @@ -29280,6 +31135,17 @@ y/e/d> y - Type: SizeSuffix - Default: 20Mi + #### --compress-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_COMPRESS_DESCRIPTION + - Type: string + - Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -29402,6 +31268,21 @@ files=drive:important/files -------------------- y) Yes this is OK - Type: SpaceSepList - Default: + ### Advanced options + + Here are the Advanced options specific to combine (Combine several remotes into one). + + #### --combine-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_COMBINE_DESCRIPTION + - Type: string + - Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
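To illustrate the `upstreams` syntax described above, a combine remote can also be written directly into the config file; the remote and directory names below are hypothetical:

    [combined]
    type = combine
    upstreams = files=drive:important/files photos=gphotos:album

With that in place, `rclone lsd combined:` would list `files` and `photos` as top-level directories, each mapped onto its upstream remote.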
@@ -29843,6 +31724,17 @@ s) Delete this remote y/e/d> y - Type: Duration - Default: 10m0s + #### --dropbox-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_DROPBOX_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -30135,6 +32027,17 @@ $ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/ - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot + #### --filefabric-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_FILEFABRIC_DESCRIPTION + - Type: string + - Required: false + # FTP @@ -30551,6 +32454,17 @@ this remote y/e/d> y - "Ctl,LeftPeriod,Slash" - VsFTPd can't handle file names starting with dot + #### --ftp-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_FTP_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -31243,6 +33157,17 @@ c) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot + #### --gcs-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_GCS_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -32577,10 +34502,23 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder - "true" - Get GCP IAM credentials from the environment (env vars or IAM). + #### --drive-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_DRIVE_DESCRIPTION + - Type: string + - Required: false + ### Metadata User metadata is stored in the properties field of the drive object. + Metadata is supported on files and directories. + Here are the possible system metadata items for the drive backend. | Name | Help | Type | Example | Read Only | @@ -33224,6 +35162,14 @@ will count towards storage in your Google Account. - Config: batch_commit_timeout - Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT - Type: Duration - Default: 10m0s + #### --gphotos-description + + Description of the remote + + Properties: + + - Config: description - Env Var: RCLONE_GPHOTOS_DESCRIPTION - Type: string - Required: false + ## Limitations Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is put turned into a media item. @@ -33452,6 +35398,17 @@ remote:/path/to/sum.sha1 - Type: SizeSuffix - Default: 0 + #### --hasher-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_HASHER_DESCRIPTION + - Type: string + - Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
@@ -33766,6 +35723,17 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p - Type: Encoding - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot + #### --hdfs-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_HDFS_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -34164,6 +36132,17 @@ Delete this remote y/e/d> y - Type: Encoding - Default: Slash,Dot + #### --hidrive-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_HIDRIVE_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -34376,6 +36355,17 @@ k) Quit config e/n/d/r/c/s/q> q - Type: bool - Default: false + #### --http-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_HTTP_DESCRIPTION + - Type: string + - Required: false + ## Backend commands Here are the commands specific to the http backend. @@ -34591,6 +36581,17 @@ rclone ls imagekit-media-library:directory - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket + #### --imagekit-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_IMAGEKIT_DESCRIPTION + - Type: string + - Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -34838,6 +36839,17 @@ remote d) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot + #### --internetarchive-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_INTERNETARCHIVE_DESCRIPTION + - Type: string + - Required: false + ### Metadata Metadata fields provided by Internet Archive. @@ -35265,6 +37277,17 @@ Edit this remote d) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot + #### --jottacloud-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_JOTTACLOUD_DESCRIPTION + - Type: string + - Required: false + ### Metadata Jottacloud has limited support for metadata, currently an extended set of timestamps. @@ -35469,6 +37492,17 @@ is OK (default) e) Edit this remote d) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + #### --koofr-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_KOOFR_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -35590,6 +37624,21 @@ remote d) Delete this remote y/e/d> y - Type: string - Required: true + ### Advanced options + + Here are the Advanced options specific to linkbox (Linkbox). 
+ + #### --linkbox-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_LINKBOX_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -35969,6 +38018,17 @@ this remote d) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot + #### --mailru-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_MAILRU_DESCRIPTION + - Type: string + - Required: false + ## Limitations @@ -36227,6 +38287,17 @@ me@example.com:/$ - Type: Encoding - Default: Slash,InvalidUtf8,Dot + #### --mega-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_MEGA_DESCRIPTION + - Type: string + - Required: false + ### Process `killed` @@ -36292,6 +38363,21 @@ a) Delete this remote y/e/d> y set](https://rclone.org/overview/#restricted-characters). + ### Advanced options + + Here are the Advanced options specific to memory (In memory object storage system.). + + #### --memory-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_MEMORY_DESCRIPTION + - Type: string + - Required: false + # Akamai NetStorage @@ -36504,6 +38590,17 @@ Edit this remote d) Delete this remote y/e/d> y - "https" - HTTPS protocol + #### --netstorage-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_NETSTORAGE_DESCRIPTION + - Type: string + - Required: false + ## Backend commands Here are the commands specific to the netstorage backend. @@ -37347,6 +39444,35 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y - Type: bool - Default: false + #### --azureblob-delete-snapshots + + Set to specify how to deal with snapshots on blob deletion. + + Properties: + + - Config: delete_snapshots + - Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS + - Type: string + - Required: false + - Choices: + - "" + - By default, the delete operation fails if a blob has snapshots + - "include" + - Specify 'include' to remove the root blob and all its snapshots + - "only" + - Specify 'only' to remove only the snapshots but keep the root blob. + + #### --azureblob-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_AZUREBLOB_DESCRIPTION + - Type: string + - Required: false + ### Custom upload headers @@ -38043,6 +40169,17 @@ Delete this remote y/e/d> - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot + #### --azurefiles-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_AZUREFILES_DESCRIPTION + - Type: string + - Required: false + ### Custom upload headers @@ -38652,7 +40789,7 @@ e) Delete this remote y/e/d> y If set rclone will use delta listing to implement recursive listings. - If this flag is set the the onedrive backend will advertise `ListR` + If this flag is set the onedrive backend will advertise `ListR` support for recursive listings. Setting this flag speeds up these things greatly: @@ -38685,6 +40822,30 @@ e) Delete this remote y/e/d> y - Type: bool - Default: false + #### --onedrive-metadata-permissions + + Control whether permissions should be read or written in metadata. + + Reading permissions metadata from files can be done quickly, but it + isn't always desirable to set the permissions from the metadata. 
+ + + Properties: + + - Config: metadata_permissions + - Env Var: RCLONE_ONEDRIVE_METADATA_PERMISSIONS + - Type: Bits + - Default: off + - Examples: + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "read,write" + - Read and Write the value. + #### --onedrive-encoding The encoding for the backend. @@ -38698,4470 +40859,5304 @@ e) Delete this remote y/e/d> y - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot + #### --onedrive-description + + Description of the remote + + Properties: + + - Config: description + - Env Var: RCLONE_ONEDRIVE_DESCRIPTION + - Type: string + - Required: false + + ### Metadata + + OneDrive supports System Metadata (not User Metadata, as of this writing) for + both files and directories. Much of the metadata is read-only, and there are some + differences between OneDrive Personal and Business (see table below for + details). + + Permissions are also supported, if `--onedrive-metadata-permissions` is set. The + accepted values for `--onedrive-metadata-permissions` are `read`, `write`, + `read,write`, and `off` (the default). `write` supports adding new permissions, + updating the "role" of existing permissions, and removing permissions. Updating + and removing require the Permission ID to be known, so it is recommended to use + `read,write` instead of `write` if you wish to update/remove permissions. + + Permissions are read/written in JSON format using the same schema as the + [OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online), + which differs slightly between OneDrive Personal and Business. + + Example for OneDrive Personal: + ```json + [ + { + "id": "1234567890ABC!123", + "grantedTo": { + "user": { + "id": "ryan@contoso.com" + }, + "application": {}, + "device": {} + }, + "invitation": { + "email": "ryan@contoso.com" + }, + "link": { + "webUrl": "https://1drv.ms/t/s!1234567890ABC" + }, + "roles": [ + "read" + ], + "shareId": "s!1234567890ABC" + } + ] + +Example for OneDrive Business: + + [ + { + "id": "48d31887-5fad-4d73-a9f5-3c356e68a038", + "grantedToIdentities": [ + { + "user": { + "displayName": "ryan@contoso.com" + }, + "application": {}, + "device": {} + } + ], + "link": { + "type": "view", + "scope": "users", + "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s" + }, + "roles": [ + "read" + ], + "shareId": "u!LKj1lkdlals90j1nlkascl" + }, + { + "id": "5D33DD65C6932946", + "grantedTo": { + "user": { + "displayName": "John Doe", + "id": "efee1b77-fb3b-4f65-99d6-274c11914d12" + }, + "application": {}, + "device": {} + }, + "roles": [ + "owner" + ], + "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U" + } + ] + +To write permissions, pass in a "permissions" metadata key using this +same format. The --metadata-mapper tool can be very helpful for this. + +When adding permissions, an email address can be provided in the User.ID +or DisplayName properties of grantedTo or grantedToIdentities. +Alternatively, an ObjectID can be provided in User.ID. At least one +valid recipient must be provided in order to add a permission for a +user. Creating a Public Link is also supported, if Link.Scope is set to +"anonymous". 
+ +Example request to add a "read" permission: + + [ + { + "id": "", + "grantedTo": { + "user": {}, + "application": {}, + "device": {} + }, + "grantedToIdentities": [ + { + "user": { + "id": "ryan@contoso.com" + }, + "application": {}, + "device": {} + } + ], + "roles": [ + "read" + ] + } + ] + +Note that adding a permission can fail if a conflicting permission +already exists for the file/folder. + +To update an existing permission, include both the Permission ID and the +new roles to be assigned. roles is the only property that can be +changed. + +To remove permissions, pass in a blob containing only the permissions +you wish to keep (which can be empty, to remove all.) + +Note that both reading and writing permissions requires extra API calls, +so if you don't need to read or write permissions it is recommended to +omit --onedrive-metadata-permissions. + +Metadata and permissions are supported for Folders (directories) as well +as Files. Note that setting the mtime or btime on a Folder requires one +extra API call on OneDrive Business only. + +OneDrive does not currently support User Metadata. When writing +metadata, only writeable system properties will be written -- any +read-only or unrecognized keys passed in will be ignored. + +TIP: to see the metadata and permissions for any file or folder, run: + + rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read + +Here are the possible system metadata items for the onedrive backend. + + ------------------------------------------------------------------------------------------------------------------------------------------ + Name Help Type Example Read Only + ------------------------------- ---------------------------------- ----------- -------------------------------------- -------------------- + btime Time of file birth (creation) with RFC 3339 2006-01-02T15:04:05Z N + S accuracy (mS for OneDrive + Personal). + + content-type The MIME type of the file. string text/plain Y + + created-by-display-name Display name of the user that string John Doe Y + created the item. + + created-by-id ID of the user that created the string 48d31887-5fad-4d73-a9f5-3c356e68a038 Y + item. + + description A short description of the file. string Contract for signing N + Max 1024 characters. Only + supported for OneDrive Personal. + + id The unique identifier of the item string 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K Y + within OneDrive. + + last-modified-by-display-name Display name of the user that last string John Doe Y + modified the item. + + last-modified-by-id ID of the user that last modified string 48d31887-5fad-4d73-a9f5-3c356e68a038 Y + the item. + + malware-detected Whether OneDrive has detected that boolean true Y + the item contains malware. + + mtime Time of last modification with S RFC 3339 2006-01-02T15:04:05Z N + accuracy (mS for OneDrive + Personal). + package-type If present, indicates that this string oneNote Y + item is a package instead of a + folder or file. Packages are + treated like files in some + contexts and folders in others. - ## Limitations + permissions Permissions in a JSON dump of JSON {} N + OneDrive format. Enable with + --onedrive-metadata-permissions. + Properties: id, grantedTo, + grantedToIdentities, invitation, + inheritedFrom, link, roles, + shareId - If you don't use rclone for 90 days the refresh token will - expire. This will result in authorization problems. This is easy to - fix by running the `rclone config reconnect remote:` command to get a - new token and refresh token. 
+ shared-by-id ID of the user that shared the string 48d31887-5fad-4d73-a9f5-3c356e68a038 Y + item (if shared). + + shared-owner-id ID of the owner of the shared item string 48d31887-5fad-4d73-a9f5-3c356e68a038 Y + (if shared). - ### Naming + shared-scope If shared, indicates the scope of string users Y + how the item is shared: anonymous, + organization, or users. - Note that OneDrive is case insensitive so you can't have a - file called "Hello.doc" and one called "hello.doc". + shared-time Time when the item was shared, RFC 3339 2006-01-02T15:04:05Z Y + with S accuracy (mS for OneDrive + Personal). - There are quite a few characters that can't be in OneDrive file - names. These can't occur on Windows platforms, but on non-Windows - platforms they are common. Rclone will map these names to and from an - identical looking unicode equivalent. For example if a file has a `?` - in it will be mapped to `?` instead. + utime Time of upload with S accuracy (mS RFC 3339 2006-01-02T15:04:05Z Y + for OneDrive Personal). + ------------------------------------------------------------------------------------------------------------------------------------------ - ### File sizes +See the metadata docs for more info. - The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). +Limitations - ### Path length +If you don't use rclone for 90 days the refresh token will expire. This +will result in authorization problems. This is easy to fix by running +the rclone config reconnect remote: command to get a new token and +refresh token. - The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. +Naming - ### Number of files +Note that OneDrive is case insensitive so you can't have a file called +"Hello.doc" and one called "hello.doc". - OneDrive seems to be OK with at least 50,000 files in a folder, but at - 100,000 rclone will get errors listing the directory like `couldn’t - list files: UnknownError:`. See - [#2707](https://github.com/rclone/rclone/issues/2707) for more info. +There are quite a few characters that can't be in OneDrive file names. +These can't occur on Windows platforms, but on non-Windows platforms +they are common. Rclone will map these names to and from an identical +looking unicode equivalent. For example if a file has a ? in it will be +mapped to ? instead. - An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). +File sizes - ## Versions +The largest allowed file size is 250 GiB for both OneDrive Personal and +OneDrive for Business (Updated 13 Jan 2021). - Every change in a file OneDrive causes the service to create a new - version of the file. This counts against a users quota. For - example changing the modification time of a file creates a second - version, so the file apparently uses twice the space. 
+Path length - For example the `copy` command is affected by this as rclone copies - the file and then afterwards sets the modification time to match the - source file which uses another version. +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. If +you are encrypting file and folder names with rclone, you may want to +pay attention to this limitation because the encrypted names are +typically longer than the original ones. - You can use the `rclone cleanup` command (see below) to remove all old - versions. +Number of files - Or you can set the `no_versions` parameter to `true` and rclone will - remove versions after operations which create new versions. This takes - extra transactions so only enable it if you need it. +OneDrive seems to be OK with at least 50,000 files in a folder, but at +100,000 rclone will get errors listing the directory like +couldn’t list files: UnknownError:. See #2707 for more info. - **Note** At the time of writing Onedrive Personal creates versions - (but not for setting the modification time) but the API for removing - them returns "API not found" so cleanup and `no_versions` should not - be used on Onedrive Personal. +An official document about the limitations for different types of +OneDrive can be found here. - ### Disabling versioning +Versions - Starting October 2018, users will no longer be able to - disable versioning by default. This is because Microsoft has brought - an - [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) - to the mechanism. To change this new default setting, a PowerShell - command is required to be run by a SharePoint admin. If you are an - admin, you can run these commands in PowerShell to change that - setting: +Every change in a file OneDrive causes the service to create a new +version of the file. This counts against a users quota. For example +changing the modification time of a file creates a second version, so +the file apparently uses twice the space. - 1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) - 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` - 3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) - 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False` - 5. `Disconnect-SPOService` (to disconnect from the server) +For example the copy command is affected by this as rclone copies the +file and then afterwards sets the modification time to match the source +file which uses another version. - *Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* +You can use the rclone cleanup command (see below) to remove all old +versions. + +Or you can set the no_versions parameter to true and rclone will remove +versions after operations which create new versions. This takes extra +transactions so only enable it if you need it. + +Note At the time of writing Onedrive Personal creates versions (but not +for setting the modification time) but the API for removing them returns +"API not found" so cleanup and no_versions should not be used on +Onedrive Personal. 
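As a minimal sketch of the no_versions behaviour described above (the remote name business is hypothetical), the parameter can be enabled for a single transfer with the backend flag:

    rclone copy /home/source business:backup --onedrive-no-versions

or persisted with no_versions = true in the remote's config section. Per the note above, this should only be used with OneDrive for Business, not Onedrive Personal.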
+ +Disabling versioning + +Starting October 2018, users will no longer be able to disable +versioning by default. This is because Microsoft has brought an update +to the mechanism. To change this new default setting, a PowerShell +command is required to be run by a SharePoint admin. If you are an +admin, you can run these commands in PowerShell to change that setting: - User [Weropol](https://github.com/Weropol) has found a method to disable - versioning on OneDrive +1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case + you haven't installed this already) +2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking +3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM + (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this + will prompt for your credentials) +4. Set-SPOTenant -EnableMinimumVersionRequirement $False +5. Disconnect-SPOService (to disconnect from the server) - 1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. - 2. Click Site settings. - 3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. - 4. Click Customize "Documents". - 5. Click General Settings > Versioning Settings. - 6. Under Document Version History select the option No versioning. - Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. - 7. Apply the changes by clicking OK. - 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) - 9. Restore the versioning settings after using rclone. (Optional) +Below are the steps for normal users to disable versioning. If you don't +see the "No Versioning" option, make sure the above requirements are +met. - ## Cleanup +User Weropol has found a method to disable versioning on OneDrive - OneDrive supports `rclone cleanup` which causes rclone to look through - every file under the path supplied and delete all version but the - current version. Because this involves traversing all the files, then - querying each file for versions it can be quite slow. Rclone does - `--checkers` tests in parallel. The command also supports `--interactive`/`i` - or `--dry-run` which is a great way to see what it would do. +1. Open the settings menu by clicking on the gear symbol at the top of + the OneDrive Business page. +2. Click Site settings. +3. Once on the Site settings page, navigate to Site Administration > + Site libraries and lists. +4. Click Customize "Documents". +5. Click General Settings > Versioning Settings. +6. Under Document Version History select the option No versioning. + Note: This will disable the creation of new file versions, but will + not remove any previous versions. Your documents are safe. +7. Apply the changes by clicking OK. +8. Use rclone to upload or modify files. (I also use the + --no-update-modtime flag) +9. Restore the versioning settings after using rclone. (Optional) + +Cleanup + +OneDrive supports rclone cleanup which causes rclone to look through +every file under the path supplied and delete all version but the +current version. Because this involves traversing all the files, then +querying each file for versions it can be quite slow. Rclone does +--checkers tests in parallel. The command also supports --interactive/i +or --dry-run which is a great way to see what it would do. 
+ + rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir + rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir + +NB Onedrive personal can't currently delete versions + +Troubleshooting + +Excessive throttling or blocked on SharePoint + +If you experience excessive throttling or is being blocked on SharePoint +then it may help to set the user agent explicitly with a flag like this: +--user-agent "ISV|rclone.org|rclone/v1.55.1" + +The specific details can be found in the Microsoft document: Avoid +getting throttled or blocked in SharePoint Online + +Unexpected file size/hash differences on Sharepoint + +It is a known issue that Sharepoint (not OneDrive or OneDrive for +Business) silently modifies uploaded files, mainly Office files (.docx, +.xlsx, etc.), causing file size and hash checks to fail. There are also +other situations that will cause OneDrive to report inconsistent file +sizes. To use rclone with such affected files on Sharepoint, you may +disable these checks with the following command line arguments: + + --ignore-checksum --ignore-size + +Alternatively, if you have write access to the OneDrive files, it may be +possible to fix this problem for certain files, by attempting the steps +below. Open the web interface for OneDrive and find the affected files +(which will be in the error messages/log for rclone). Simply click on +each of these files, causing OneDrive to open them on the web. This will +cause each file to be converted in place to a format that is +functionally equivalent but which will no longer trigger the size +discrepancy. Once all problematic files are converted you will no longer +need the ignore options above. + +Replacing/deleting existing files on Sharepoint gets "item not found" + +It is a known issue that Sharepoint (not OneDrive or OneDrive for +Business) may return "item not found" errors when users try to replace +or delete uploaded files; this seems to mainly affect Office files +(.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a +workaround, you may use the --backup-dir command line +argument so rclone moves the files to be replaced/deleted into a given +backup directory (instead of directly replacing/deleting them). For +example, to instruct rclone to move the files into the directory +rclone-backup-dir on backend mysharepoint, you may use: - rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir + --backup-dir mysharepoint:rclone-backup-dir - **NB** Onedrive personal can't currently delete versions +access_denied (AADSTS65005) - ## Troubleshooting ## + Error: access_denied + Code: AADSTS65005 + Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. - ### Excessive throttling or blocked on SharePoint +This means that rclone can't use the OneDrive for Business API with your +account. You can't do much about it, maybe write an email to your +admins. 
- If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"` +However, there are other ways to interact with your OneDrive account. +Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint - The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) +invalid_grant (AADSTS50076) - ### Unexpected file size/hash differences on Sharepoint #### + Error: invalid_grant + Code: AADSTS50076 + Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. - It is a - [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) - issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies - uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and - hash checks to fail. There are also other situations that will cause OneDrive to - report inconsistent file sizes. To use rclone with such - affected files on Sharepoint, you - may disable these checks with the following command line arguments: +If you see the error above after enabling multi-factor authentication +for your account, you can fix it by refreshing your OAuth refresh token. +To do that, run rclone config, and choose to edit your OneDrive backend. +Then, you don't need to actually make any changes until you reach this +question: Already have a token - refresh?. For this question, answer y +and go through the process to refresh your token, just like the first +time the backend is configured. After this, rclone should work again for +this backend. ---ignore-checksum --ignore-size +Invalid request when making public links +On Sharepoint and OneDrive for Business, rclone link may return an +"Invalid request" error. A possible cause is that the organisation admin +didn't allow public links to be made for the organisation/sharepoint +library. To fix the permissions as an admin, take a look at the docs: 1, +2. - Alternatively, if you have write access to the OneDrive files, it may be possible - to fix this problem for certain files, by attempting the steps below. - Open the web interface for [OneDrive](https://onedrive.live.com) and find the - affected files (which will be in the error messages/log for rclone). Simply click on - each of these files, causing OneDrive to open them on the web. This will cause each - file to be converted in place to a format that is functionally equivalent - but which will no longer trigger the size discrepancy. Once all problematic files - are converted you will no longer need the ignore options above. +Can not access Shared with me files - ### Replacing/deleting existing files on Sharepoint gets "item not found" #### +Shared with me files is not supported by rclone currently, but there is +a workaround: - It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue - that Sharepoint (not OneDrive or OneDrive for Business) may return "item not - found" errors when users try to replace or delete uploaded files; this seems to - mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). 
As a workaround, you may use - the `--backup-dir ` command line argument so rclone moves the - files to be replaced/deleted into a given backup directory (instead of directly - replacing/deleting them). For example, to instruct rclone to move the files into - the directory `rclone-backup-dir` on backend `mysharepoint`, you may use: +1. Visit https://onedrive.live.com +2. Right click a item in Shared, then click Add shortcut to My files in + the context [make_shortcut] +3. The shortcut will appear in My files, you can access it with rclone, + it behaves like a normal folder/file. [in_my_files] [rclone_mount] ---backup-dir mysharepoint:rclone-backup-dir +Live Photos uploaded from iOS (small video clips in .heic files) +The iOS OneDrive app introduced upload and storage of Live Photos in +2020. The usage and download of these uploaded Live Photos is +unfortunately still work-in-progress and this introduces several issues +when copying, synchronising and mounting – both in rclone and in the +native OneDrive client on Windows. - ### access\_denied (AADSTS65005) #### +The root cause can easily be seen if you locate one of your Live Photos +in the OneDrive web interface. Then download the photo from the web +interface. You will then see that the size of downloaded .heic file is +smaller than the size displayed in the web interface. The downloaded +file is smaller because it only contains a single frame (still photo) +extracted from the Live Photo (movie) stored in OneDrive. -Error: access_denied Code: AADSTS65005 Description: Using application -'rclone' is currently not supported for your organization -[YOUR_ORGANIZATION] because it is in an unmanaged state. An -administrator needs to claim ownership of the company by DNS validation -of [YOUR_ORGANIZATION] before the application rclone can be provisioned. +The different sizes will cause rclone copy/sync to repeatedly recopy +unmodified photos something like this: + DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) + DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK + INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) - This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. +These recopies can be worked around by adding --ignore-size. Please note +that this workaround only syncs the still-picture not the movie clip, +and relies on modification dates being correctly updated on all files in +all situations. - However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint +The different sizes will also cause rclone check to report size errors +something like this: - ### invalid\_grant (AADSTS50076) #### + ERROR : 20230203_123826234_iOS.heic: sizes differ -Error: invalid_grant Code: AADSTS50076 Description: Due to a -configuration change made by your administrator, or because you moved to -a new location, you must use multi-factor authentication to access -'...'. +These check errors can be suppressed by adding --ignore-size. +The different sizes will also cause rclone mount to fail downloading +with an error something like this: - If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. 
Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. + ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF - ### Invalid request when making public links #### +or like this when using --cache-mode=full: - On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid - request" error. A possible cause is that the organisation admin didn't allow - public links to be made for the organisation/sharepoint library. To fix the - permissions as an admin, take a look at the docs: - [1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off), - [2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3). + INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: + ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ### Can not access `Shared` with me files +OpenDrive - Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: +Paths are specified as remote:path - 1. Visit [https://onedrive.live.com](https://onedrive.live.com/) - 2. Right click a item in `Shared`, then click `Add shortcut to My files` in the context - ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)") - 3. The shortcut will appear in `My files`, you can access it with rclone, it behaves like a normal folder/file. - ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)") - ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)") +Paths may be as deep as required, e.g. remote:directory/subdirectory. - ### Live Photos uploaded from iOS (small video clips in .heic files) +Configuration - The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) - of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. - The usage and download of these uploaded Live Photos is unfortunately still work-in-progress - and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows. +Here is an example of how to make a remote called remote. First run: - The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. - Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. - The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive. 
+ rclone config - The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this: +This will guide you through an interactive setup process: - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) - - These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still-picture not the movie clip, - and relies on modification dates being correctly updated on all files in all situations. - - The different sizes will also cause `rclone check` to report size errors something like this: - - ERROR : 20230203_123826234_iOS.heic: sizes differ - - These check errors can be suppressed by adding `--ignore-size`. - - The different sizes will also cause `rclone mount` to fail downloading with an error something like this: - - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF - - or like this when using `--cache-mode=full`: - - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - - # OpenDrive - - Paths are specified as `remote:path` - - Paths may be as deep as required, e.g. `remote:directory/subdirectory`. - - ## Configuration - - Here is an example of how to make a remote called `remote`. First run: - - rclone config - - This will guide you through an interactive setup process: - -n) New remote -o) Delete remote -p) Quit config e/n/d/q> n name> remote Type of storage to configure. - Choose a number from below, or type in your own value [snip] XX / - OpenDrive  "opendrive" [snip] Storage> opendrive Username username> + n) New remote + d) Delete remote + q) Quit config + e/n/d/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / OpenDrive + \ "opendrive" + [snip] + Storage> opendrive + Username + username> Password -q) Yes type in my own password -r) Generate random password y/g> y Enter the password: password: - Confirm the password: password: -------------------- [remote] - username = password = *** ENCRYPTED *** -------------------- -s) Yes this is OK -t) Edit this remote -u) Delete this remote y/e/d> y + y) Yes type in my own password + g) Generate random password + y/g> y + Enter the password: + password: + Confirm the password: + password: + -------------------- + [remote] + username = + password = *** ENCRYPTED *** + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y +List directories in top level of your OpenDrive - List directories in top level of your OpenDrive + rclone lsd remote: - rclone lsd remote: +List all the files in your OpenDrive - List all the files in your OpenDrive + rclone ls remote: - rclone ls remote: +To copy a local directory to an OpenDrive directory called backup - To copy a local directory to an OpenDrive directory called backup + rclone copy /home/source remote:backup - rclone copy /home/source remote:backup +Modification times and hashes - ### Modification times and hashes +OpenDrive allows modification times to be set on objects accurate to 1 +second. 
These will be used to detect whether objects need syncing or +not. - OpenDrive allows modification times to be set on objects accurate to 1 - second. These will be used to detect whether objects need syncing or - not. +The MD5 hash algorithm is supported. - The MD5 hash algorithm is supported. +Restricted filename characters - ### Restricted filename characters + Character Value Replacement + ----------- ------- ------------- + NUL 0x00 ␀ + / 0x2F / + " 0x22 " + * 0x2A * + : 0x3A : + < 0x3C < + > 0x3E > + ? 0x3F ? + \ 0x5C \ + | 0x7C | - | Character | Value | Replacement | - | --------- |:-----:|:-----------:| - | NUL | 0x00 | ␀ | - | / | 0x2F | / | - | " | 0x22 | " | - | * | 0x2A | * | - | : | 0x3A | : | - | < | 0x3C | < | - | > | 0x3E | > | - | ? | 0x3F | ? | - | \ | 0x5C | \ | - | \| | 0x7C | | | +File names can also not begin or end with the following characters. +These only get replaced if they are the first or last character in the +name: - File names can also not begin or end with the following characters. - These only get replaced if they are the first or last character in the name: + Character Value Replacement + ----------- ------- ------------- + SP 0x20 ␠ + HT 0x09 ␉ + LF 0x0A ␊ + VT 0x0B ␋ + CR 0x0D ␍ - | Character | Value | Replacement | - | --------- |:-----:|:-----------:| - | SP | 0x20 | ␠ | - | HT | 0x09 | ␉ | - | LF | 0x0A | ␊ | - | VT | 0x0B | ␋ | - | CR | 0x0D | ␍ | +Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON +strings. +Standard options - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), - as they can't be used in JSON strings. +Here are the Standard options specific to opendrive (OpenDrive). +--opendrive-username - ### Standard options +Username. - Here are the Standard options specific to opendrive (OpenDrive). +Properties: - #### --opendrive-username +- Config: username +- Env Var: RCLONE_OPENDRIVE_USERNAME +- Type: string +- Required: true - Username. +--opendrive-password - Properties: +Password. - - Config: username - - Env Var: RCLONE_OPENDRIVE_USERNAME - - Type: string - - Required: true +NB Input to this must be obscured - see rclone obscure. - #### --opendrive-password +Properties: - Password. +- Config: password +- Env Var: RCLONE_OPENDRIVE_PASSWORD +- Type: string +- Required: true - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). +Advanced options - Properties: +Here are the Advanced options specific to opendrive (OpenDrive). - - Config: password - - Env Var: RCLONE_OPENDRIVE_PASSWORD - - Type: string - - Required: true +--opendrive-encoding - ### Advanced options +The encoding for the backend. - Here are the Advanced options specific to opendrive (OpenDrive). +See the encoding section in the overview for more info. - #### --opendrive-encoding +Properties: - The encoding for the backend. +- Config: encoding +- Env Var: RCLONE_OPENDRIVE_ENCODING +- Type: Encoding +- Default: + Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +--opendrive-chunk-size - Properties: +Files will be uploaded in chunks this size. 
- - Config: encoding - - Env Var: RCLONE_OPENDRIVE_ENCODING - - Type: Encoding - - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot +Note that these chunks are buffered in memory so increasing them will +increase memory use. - #### --opendrive-chunk-size +Properties: - Files will be uploaded in chunks this size. +- Config: chunk_size +- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE +- Type: SizeSuffix +- Default: 10Mi - Note that these chunks are buffered in memory so increasing them will - increase memory use. +--opendrive-description - Properties: +Description of the remote - - Config: chunk_size - - Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE - - Type: SizeSuffix - - Default: 10Mi +Properties: +- Config: description +- Env Var: RCLONE_OPENDRIVE_DESCRIPTION +- Type: string +- Required: false +Limitations - ## Limitations +Note that OpenDrive is case insensitive so you can't have a file called +"Hello.doc" and one called "hello.doc". - Note that OpenDrive is case insensitive so you can't have a - file called "Hello.doc" and one called "hello.doc". +There are quite a few characters that can't be in OpenDrive file names. +These can't occur on Windows platforms, but on non-Windows platforms +they are common. Rclone will map these names to and from an identical +looking unicode equivalent. For example if a file has a ? in it will be +mapped to ? instead. - There are quite a few characters that can't be in OpenDrive file - names. These can't occur on Windows platforms, but on non-Windows - platforms they are common. Rclone will map these names to and from an - identical looking unicode equivalent. For example if a file has a `?` - in it will be mapped to `?` instead. +rclone about is not supported by the OpenDrive backend. Backends without +this capability cannot determine free space for an rclone mount or use +policy mfs (most free space) as a member of an rclone union remote. - `rclone about` is not supported by the OpenDrive backend. Backends without - this capability cannot determine free space for an rclone mount or - use policy `mfs` (most free space) as a member of an rclone union - remote. +See List of backends that do not support rclone about and rclone about - See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +Oracle Object Storage - # Oracle Object Storage - - [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) - - [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) - - [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) +- Oracle Object Storage Overview +- Oracle Object Storage FAQ +- Oracle Object Storage Limits - Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in - too, e.g. `remote:bucket/path/to/dir`. +Paths are specified as remote:bucket (or remote: for the lsd command.) +You may put subdirectories in too, e.g. remote:bucket/path/to/dir. 
- Sample command to transfer local artifacts to remote:bucket in oracle object storage: +Sample command to transfer local artifacts to remote:bucket in oracle +object storage: - `rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv` +rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv - ## Configuration +Configuration - Here is an example of making an oracle object storage configuration. `rclone config` walks you - through it. +Here is an example of making an oracle object storage configuration. +rclone config walks you through it. - Here is an example of how to make a remote called `remote`. First run: +Here is an example of how to make a remote called remote. First run: - rclone config + rclone config - This will guide you through an interactive setup process: +This will guide you through an interactive setup process: -n) New remote -o) Delete remote -p) Rename remote -q) Copy remote -r) Set configuration password -s) Quit config e/n/d/r/c/s/q> n + n) New remote + d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + e/n/d/r/c/s/q> n -Enter name for new remote. name> remote + Enter name for new remote. + name> remote -Option Storage. Type of storage to configure. Choose a number from -below, or type in your own value. [snip] XX / Oracle Cloud -Infrastructure Object Storage  (oracleobjectstorage) Storage> -oracleobjectstorage - -Option provider. Choose your Auth Provider Choose a number from below, -or type in your own string value. Press Enter for the default -(env_auth). 1 / automatically pickup the credentials from runtime(env), -first one to provide auth wins  (env_auth) / use an OCI user and an API -key for authentication. 2 | you’ll need to put in a config file your -tenancy OCID, user OCID, region, the path, fingerprint to an API key. | -https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - (user_principal_auth) / use instance principals to authorize an -instance to make API calls. 3 | each instance has its own identity, and -authenticates using the certificates that are read from instance -metadata. | -https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - (instance_principal_auth) 4 / use resource principals to make API calls - (resource_principal_auth) 5 / no credentials needed, this is typically -for reading public buckets  (no_auth) provider> 2 - -Option namespace. Object storage namespace Enter a value. namespace> -idbamagbg734 - -Option compartment. Object storage compartment OCID Enter a value. -compartment> -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba - -Option region. Object storage Region Enter a value. region> us-ashburn-1 - -Option endpoint. Endpoint for Object storage API. 
Leave blank to use the -default endpoint for the region. Enter a value. Press Enter to leave -empty. endpoint> - -Option config_file. Full Path to OCI config file Choose a number from -below, or type in your own string value. Press Enter for the default -(~/.oci/config). 1 / oci configuration file location  (~/.oci/config) -config_file> /etc/oci/dev.conf - -Option config_profile. Profile name inside OCI config file Choose a -number from below, or type in your own string value. Press Enter for the -default (Default). 1 / Use the default profile  (Default) -config_profile> Test - -Edit advanced config? y) Yes n) No (default) y/n> n - -Configuration complete. Options: - type: oracleobjectstorage - -namespace: idbamagbg734 - compartment: -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -- region: us-ashburn-1 - provider: user_principal_auth - config_file: -/etc/oci/dev.conf - config_profile: Test Keep this "remote" remote? y) -Yes this is OK (default) e) Edit this remote d) Delete this remote -y/e/d> y - - - See all buckets - - rclone lsd remote: - - Create a new bucket - - rclone mkdir remote:bucket - - List the contents of a bucket - - rclone ls remote:bucket - rclone ls remote:bucket --max-depth 1 - - ## Authentication Providers - - OCI has various authentication methods. To learn more about authentication methods please refer [oci authentication - methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) - These choices can be specified in the rclone config file. - - Rclone supports the following OCI authentication provider. - - User Principal - Instance Principal - Resource Principal - No authentication - - ### User Principal - - Sample rclone config file for Authentication Provider User Principal: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = user_principal_auth - config_file = /home/opc/.oci/config - config_profile = Default - - Advantages: - - One can use this method from any server within OCI or on-premises or from other cloud provider. - - Considerations: - - you need to configure user’s privileges / policy to allow access to object storage - - Overhead of managing users and keys. - - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials. - - ### Instance Principal - - An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal. - With this approach no credentials have to be stored and managed. - - Sample rclone configuration file for Authentication Provider Instance Principal: - - [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf - [oos] - type = oracleobjectstorage - namespace = idfn - compartment = ocid1.compartment.oc1..aak7a - region = us-ashburn-1 - provider = instance_principal_auth - - Advantages: - - - With instance principals, you don't need to configure user credentials and transfer/ save it to disk in your compute - instances or rotate the credentials. - - You don’t need to deal with users and keys. - - Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault, - using kms etc. - - Considerations: - - - You need to configure a dynamic group having this instance as member and add policy to read object storage to that - dynamic group. - - Everyone who has access to this machine can execute the CLI commands. 
- - It is applicable for oci compute instances only. It cannot be used on external instance or resources. - - ### Resource Principal - - Resource principal auth is very similar to instance principal auth but used for resources that are not - compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). - To use resource principal ensure Rclone process is started with these environment variables set in its process. - - export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 - export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 - export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem - export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token - - Sample rclone configuration file for Authentication Provider Resource Principal: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = resource_principal_auth - - ### No authentication - - Public buckets do not require any authentication mechanism to read objects. - Sample rclone configuration file for No authentication: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = no_auth - - ### Modification times and hashes - - The modification time is stored as metadata on the object as - `opc-meta-mtime` as floating point since the epoch, accurate to 1 ns. - - If the modification time needs to be updated rclone will attempt to perform a server - side copy to update the modification if the object can be copied in a single part. - In the case the object is larger than 5Gb, the object will be uploaded rather than copied. - - Note that reading this from the object takes an additional `HEAD` request as the metadata - isn't returned in object listings. - - The MD5 hash algorithm is supported. - - ### Multipart uploads - - rclone supports multipart uploads with OOS which means that it can - upload files bigger than 5 GiB. - - Note that files uploaded *both* with multipart upload *and* through - crypt remotes do not have MD5 sums. - - rclone switches from single part uploads to multipart uploads at the - point specified by `--oos-upload-cutoff`. This can be a maximum of 5 GiB - and a minimum of 0 (ie always upload multipart files). - - The chunk sizes used in the multipart upload are specified by - `--oos-chunk-size` and the number of chunks uploaded concurrently is - specified by `--oos-upload-concurrency`. - - Multipart uploads will use `--transfers` * `--oos-upload-concurrency` * - `--oos-chunk-size` extra memory. Single part uploads to not use extra - memory. - - Single part transfers can be faster than multipart transfers or slower - depending on your latency from oos - the more latency, the more likely - single part transfers will be faster. - - Increasing `--oos-upload-concurrency` will increase throughput (8 would - be a sensible value) and increasing `--oos-chunk-size` also increases - throughput (16M would be sensible). Increasing either of these will - use more memory. The default values are high enough to gain most of - the possible performance without using too much memory. - - - ### Standard options - - Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). - - #### --oos-provider + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. 
+ [snip] + XX / Oracle Cloud Infrastructure Object Storage + \ (oracleobjectstorage) + Storage> oracleobjectstorage + Option provider. Choose your Auth Provider + Choose a number from below, or type in your own string value. + Press Enter for the default (env_auth). + 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins + \ (env_auth) + / use an OCI user and an API key for authentication. + 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + \ (user_principal_auth) + / use instance principals to authorize an instance to make API calls. + 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. + | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + \ (instance_principal_auth) + / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud + 4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM). + | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm + \ (workload_identity_auth) + 5 / use resource principals to make API calls + \ (resource_principal_auth) + 6 / no credentials needed, this is typically for reading public buckets + \ (no_auth) + provider> 2 - Properties: - - - Config: provider - - Env Var: RCLONE_OOS_PROVIDER - - Type: string - - Default: "env_auth" - - Examples: - - "env_auth" - - automatically pickup the credentials from runtime(env), first one to provide auth wins - - "user_principal_auth" - - use an OCI user and an API key for authentication. - - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - "instance_principal_auth" - - use instance principals to authorize an instance to make API calls. - - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - "resource_principal_auth" - - use resource principals to make API calls - - "no_auth" - - no credentials needed, this is typically for reading public buckets - - #### --oos-namespace - + Option namespace. Object storage namespace + Enter a value. + namespace> idbamagbg734 - Properties: - - - Config: namespace - - Env Var: RCLONE_OOS_NAMESPACE - - Type: string - - Required: true - - #### --oos-compartment - + Option compartment. Object storage compartment OCID + Enter a value. + compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba - Properties: - - - Config: compartment - - Env Var: RCLONE_OOS_COMPARTMENT - - Provider: !no_auth - - Type: string - - Required: true - - #### --oos-region - + Option region. Object storage Region + Enter a value. + region> us-ashburn-1 - Properties: - - - Config: region - - Env Var: RCLONE_OOS_REGION - - Type: string - - Required: true - - #### --oos-endpoint - + Option endpoint. Endpoint for Object storage API. - Leave blank to use the default endpoint for the region. + Enter a value. Press Enter to leave empty. + endpoint> + + Option config_file. + Full Path to OCI config file + Choose a number from below, or type in your own string value. + Press Enter for the default (~/.oci/config). 
+    1 / oci configuration file location
+      \ (~/.oci/config)
+    config_file> /etc/oci/dev.conf
+
+    Option config_profile.
+    Profile name inside OCI config file
+    Choose a number from below, or type in your own string value.
+    Press Enter for the default (Default).
+    1 / Use the default profile
+      \ (Default)
+    config_profile> Test
+
+    Edit advanced config?
+    y) Yes
+    n) No (default)
+    y/n> n
+
+    Configuration complete.
+    Options:
+    - type: oracleobjectstorage
+    - namespace: idbamagbg734
+    - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+    - region: us-ashburn-1
+    - provider: user_principal_auth
+    - config_file: /etc/oci/dev.conf
+    - config_profile: Test
+    Keep this "remote" remote?
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+See all buckets
+
+    rclone lsd remote:
+
+Create a new bucket
+
+    rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+    rclone ls remote:bucket
+    rclone ls remote:bucket --max-depth 1
+
+Authentication Providers
+
+OCI has various authentication methods. To learn more about
+authentication methods please refer to oci authentication methods. These
+choices can be specified in the rclone config file.
+
+Rclone supports the following OCI authentication providers:
+
+    User Principal
+    Instance Principal
+    Resource Principal
+    Workload Identity
+    No authentication
+
+User Principal
+
+Sample rclone config file for Authentication Provider User Principal:
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = user_principal_auth
+    config_file = /home/opc/.oci/config
+    config_profile = Default
+
+Advantages:
+
+- One can use this method from any server within OCI, on-premises, or
+  from another cloud provider.
+
+Considerations:
+
+- You need to configure the user’s privileges / policy to allow access
+  to object storage.
+- Overhead of managing users and keys.
+- If the user is deleted, the config file will no longer work and may
+  cause automation regressions that use the user's credentials.

-    Properties:

+Instance Principal
+
+An OCI compute instance can be authorized to use rclone by using its
+identity and certificates as an instance principal. With this approach
+no credentials have to be stored and managed.
+
+Sample rclone configuration file for Authentication Provider Instance
+Principal:
+
+    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+    [oos]
+    type = oracleobjectstorage
+    namespace = idfn
+    compartment = ocid1.compartment.oc1..aak7a
+    region = us-ashburn-1
+    provider = instance_principal_auth
+
+Advantages:

-    - Config: endpoint
-    - Env Var: RCLONE_OOS_ENDPOINT
-    - Type: string
-    - Required: false

+- With instance principals, you don't need to configure user
+  credentials, transfer / save them to disk in your compute instances,
+  or rotate the credentials.
+- You don’t need to deal with users and keys.
+- Greatly helps in automation as you don't have to manage access keys,
+  user private keys, storing them in vault, using kms etc.

-    #### --oos-config-file

+Considerations:

-    Path to OCI config file

+- You need to configure a dynamic group having this instance as member
+  and add policy to read object storage to that dynamic group.
+- Everyone who has access to this machine can execute the CLI
+  commands.
+- It is applicable for oci compute instances only. It cannot be used
+  on external instance or resources.
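+
+The same instance principal remote could also be created
+non-interactively (an illustrative sketch reusing the sample
+namespace, compartment and region shown above - substitute your own
+values):
+
+    rclone config create oos oracleobjectstorage \
+        provider=instance_principal_auth \
+        namespace=idfn \
+        compartment=ocid1.compartment.oc1..aak7a \
+        region=us-ashburn-1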
-    Properties:

+Resource Principal

-    - Config: config_file
-    - Env Var: RCLONE_OOS_CONFIG_FILE
-    - Provider: user_principal_auth
-    - Type: string
-    - Default: "~/.oci/config"
-    - Examples:
-        - "~/.oci/config"
-            - oci configuration file location
+Resource principal auth is very similar to instance principal auth but
+is used for resources that are not compute instances, such as serverless
+functions. To use resource principal, ensure the Rclone process is
+started with these environment variables set in its environment.

-    #### --oos-config-profile
+    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token

-    Profile name inside the oci config file
+Sample rclone configuration file for Authentication Provider Resource
+Principal:

-    Properties:
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = resource_principal_auth

-    - Config: config_profile
-    - Env Var: RCLONE_OOS_CONFIG_PROFILE
-    - Provider: user_principal_auth
-    - Type: string
-    - Default: "Default"
-    - Examples:
-        - "Default"
-            - Use the default profile
+Workload Identity
+
+Workload Identity auth may be used when running Rclone from a Kubernetes
+pod on a Container Engine for Kubernetes (OKE) cluster. For more details
+on configuring Workload Identity, see Granting Workloads Access to OCI
+Resources. To use workload identity, ensure Rclone is started with these
+environment variables set in its environment.
+
+    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+
+No authentication
+
+Public buckets do not require any authentication mechanism to read
+objects. Sample rclone configuration file for No authentication:
+
+    [oos]
+    type = oracleobjectstorage
+    namespace = id34
+    compartment = ocid1.compartment.oc1..aaba
+    region = us-ashburn-1
+    provider = no_auth
+
+Modification times and hashes
+
+The modification time is stored as metadata on the object as
+opc-meta-mtime as floating point since the epoch, accurate to 1 ns.
+
+If the modification time needs to be updated rclone will attempt to
+perform a server side copy to update the modification time if the object
+can be copied in a single part. In the case the object is larger than
+5 GiB, the object will be uploaded rather than copied.
+
+Note that reading this from the object takes an additional HEAD request
+as the metadata isn't returned in object listings.

-    ### Advanced options
+The MD5 hash algorithm is supported.

-    Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+Multipart uploads

-    #### --oos-storage-tier
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.

-    The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+Note that files uploaded both with multipart upload and through crypt
+remotes do not have MD5 sums.

-    Properties:
+rclone switches from single part uploads to multipart uploads at the
+point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB
+and a minimum of 0 (ie always upload multipart files).
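+
+For example (an illustrative command; the source path is hypothetical),
+setting the cutoff to 0 forces every upload to take the multipart path:
+
+    rclone copy --oos-upload-cutoff 0 /path/to/artifacts remote:bucket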
-    - Config: storage_tier
-    - Env Var: RCLONE_OOS_STORAGE_TIER
-    - Type: string
-    - Default: "Standard"
-    - Examples:
-        - "Standard"
-            - Standard storage tier, this is the default tier
-        - "InfrequentAccess"
-            - InfrequentAccess storage tier
-        - "Archive"
-            - Archive storage tier
+The chunk sizes used in the multipart upload are specified by
+--oos-chunk-size and the number of chunks uploaded concurrently is
+specified by --oos-upload-concurrency.

-    #### --oos-upload-cutoff
+Multipart uploads will use --transfers * --oos-upload-concurrency *
+--oos-chunk-size extra memory. Single part uploads do not use extra
+memory.

-    Cutoff for switching to chunked upload.
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency from oos - the more latency, the more likely
+single part transfers will be faster.

-    Any files larger than this will be uploaded in chunks of chunk_size.
-    The minimum is 0 and the maximum is 5 GiB.
+Increasing --oos-upload-concurrency will increase throughput (8 would be
+a sensible value) and increasing --oos-chunk-size also increases
+throughput (16M would be sensible). Increasing either of these will use
+more memory. The default values are high enough to gain most of the
+possible performance without using too much memory.

-    Properties:
+Standard options

-    - Config: upload_cutoff
-    - Env Var: RCLONE_OOS_UPLOAD_CUTOFF
-    - Type: SizeSuffix
-    - Default: 200Mi
+Here are the Standard options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).

-    #### --oos-chunk-size
+--oos-provider

-    Chunk size to use for uploading.
+Choose your Auth Provider

-    When uploading files larger than upload_cutoff or files with unknown
-    size (e.g. from "rclone rcat" or uploaded with "rclone mount" they will be uploaded
-    as multipart uploads using this chunk size.
+Properties:

-    Note that "upload_concurrency" chunks of this size are buffered
-    in memory per transfer.
+- Config: provider
+- Env Var: RCLONE_OOS_PROVIDER
+- Type: string
+- Default: "env_auth"
+- Examples:
+    - "env_auth"
+        - automatically pickup the credentials from runtime(env),
+          first one to provide auth wins
+    - "user_principal_auth"
+        - use an OCI user and an API key for authentication.
+        - you’ll need to put in a config file your tenancy OCID, user
+          OCID, region, the path, fingerprint to an API key.
+        - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+    - "instance_principal_auth"
+        - use instance principals to authorize an instance to make API
+          calls.
+        - each instance has its own identity, and authenticates using
+          the certificates that are read from instance metadata.
+        - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+    - "workload_identity_auth"
+        - use workload identity to grant OCI Container Engine for
+          Kubernetes workloads policy-driven access to OCI resources
+          using OCI Identity and Access Management (IAM).
+        - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm
+    - "resource_principal_auth"
+        - use resource principals to make API calls
+    - "no_auth"
+        - no credentials needed, this is typically for reading public
+          buckets

-    If you are transferring large files over high-speed links and you have
-    enough memory, then increasing this will speed up the transfers.
+--oos-namespace

-    Rclone will automatically increase the chunk size when uploading a
-    large file of known size to stay below the 10,000 chunks limit.
+Object storage namespace - Files of unknown size are uploaded with the configured - chunk_size. Since the default chunk size is 5 MiB and there can be at - most 10,000 chunks, this means that by default the maximum size of - a file you can stream upload is 48 GiB. If you wish to stream upload - larger files then you will need to increase chunk_size. +Properties: - Increasing the chunk size decreases the accuracy of the progress - statistics displayed with "-P" flag. +- Config: namespace +- Env Var: RCLONE_OOS_NAMESPACE +- Type: string +- Required: true +--oos-compartment - Properties: +Object storage compartment OCID - - Config: chunk_size - - Env Var: RCLONE_OOS_CHUNK_SIZE - - Type: SizeSuffix - - Default: 5Mi +Properties: - #### --oos-max-upload-parts +- Config: compartment +- Env Var: RCLONE_OOS_COMPARTMENT +- Provider: !no_auth +- Type: string +- Required: true - Maximum number of parts in a multipart upload. +--oos-region - This option defines the maximum number of multipart chunks to use - when doing a multipart upload. +Object storage Region - OCI has max parts limit of 10,000 chunks. +Properties: - Rclone will automatically increase the chunk size when uploading a - large file of a known size to stay below this number of chunks limit. +- Config: region +- Env Var: RCLONE_OOS_REGION +- Type: string +- Required: true +--oos-endpoint - Properties: +Endpoint for Object storage API. - - Config: max_upload_parts - - Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS - - Type: int - - Default: 10000 +Leave blank to use the default endpoint for the region. - #### --oos-upload-concurrency +Properties: - Concurrency for multipart uploads. +- Config: endpoint +- Env Var: RCLONE_OOS_ENDPOINT +- Type: string +- Required: false - This is the number of chunks of the same file that are uploaded - concurrently. +--oos-config-file - If you are uploading small numbers of large files over high-speed links - and these uploads do not fully utilize your bandwidth, then increasing - this may help to speed up the transfers. +Path to OCI config file - Properties: +Properties: - - Config: upload_concurrency - - Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY - - Type: int - - Default: 10 +- Config: config_file +- Env Var: RCLONE_OOS_CONFIG_FILE +- Provider: user_principal_auth +- Type: string +- Default: "~/.oci/config" +- Examples: + - "~/.oci/config" + - oci configuration file location - #### --oos-copy-cutoff +--oos-config-profile - Cutoff for switching to multipart copy. +Profile name inside the oci config file - Any files larger than this that need to be server-side copied will be - copied in chunks of this size. +Properties: - The minimum is 0 and the maximum is 5 GiB. +- Config: config_profile +- Env Var: RCLONE_OOS_CONFIG_PROFILE +- Provider: user_principal_auth +- Type: string +- Default: "Default" +- Examples: + - "Default" + - Use the default profile - Properties: +Advanced options - - Config: copy_cutoff - - Env Var: RCLONE_OOS_COPY_CUTOFF - - Type: SizeSuffix - - Default: 4.656Gi +Here are the Advanced options specific to oracleobjectstorage (Oracle +Cloud Infrastructure Object Storage). - #### --oos-copy-timeout +--oos-storage-tier - Timeout for copy. +The storage class to use when storing new objects in storage. 
+https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm - Copy is an asynchronous operation, specify timeout to wait for copy to succeed +Properties: +- Config: storage_tier +- Env Var: RCLONE_OOS_STORAGE_TIER +- Type: string +- Default: "Standard" +- Examples: + - "Standard" + - Standard storage tier, this is the default tier + - "InfrequentAccess" + - InfrequentAccess storage tier + - "Archive" + - Archive storage tier - Properties: +--oos-upload-cutoff - - Config: copy_timeout - - Env Var: RCLONE_OOS_COPY_TIMEOUT - - Type: Duration - - Default: 1m0s +Cutoff for switching to chunked upload. - #### --oos-disable-checksum +Any files larger than this will be uploaded in chunks of chunk_size. The +minimum is 0 and the maximum is 5 GiB. - Don't store MD5 checksum with object metadata. +Properties: - Normally rclone will calculate the MD5 checksum of the input before - uploading it so it can add it to metadata on the object. This is great - for data integrity checking but can cause long delays for large files - to start uploading. +- Config: upload_cutoff +- Env Var: RCLONE_OOS_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200Mi - Properties: +--oos-chunk-size - - Config: disable_checksum - - Env Var: RCLONE_OOS_DISABLE_CHECKSUM - - Type: bool - - Default: false +Chunk size to use for uploading. - #### --oos-encoding +When uploading files larger than upload_cutoff or files with unknown +size (e.g. from "rclone rcat" or uploaded with "rclone mount" they will +be uploaded as multipart uploads using this chunk size. - The encoding for the backend. +Note that "upload_concurrency" chunks of this size are buffered in +memory per transfer. - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +If you are transferring large files over high-speed links and you have +enough memory, then increasing this will speed up the transfers. - Properties: +Rclone will automatically increase the chunk size when uploading a large +file of known size to stay below the 10,000 chunks limit. - - Config: encoding - - Env Var: RCLONE_OOS_ENCODING - - Type: Encoding - - Default: Slash,InvalidUtf8,Dot +Files of unknown size are uploaded with the configured chunk_size. Since +the default chunk size is 5 MiB and there can be at most 10,000 chunks, +this means that by default the maximum size of a file you can stream +upload is 48 GiB. If you wish to stream upload larger files then you +will need to increase chunk_size. - #### --oos-leave-parts-on-error +Increasing the chunk size decreases the accuracy of the progress +statistics displayed with "-P" flag. - If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery. +Properties: - It should be set to true for resuming uploads across different sessions. +- Config: chunk_size +- Env Var: RCLONE_OOS_CHUNK_SIZE +- Type: SizeSuffix +- Default: 5Mi - WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add - additional costs if not cleaned up. +--oos-max-upload-parts +Maximum number of parts in a multipart upload. - Properties: +This option defines the maximum number of multipart chunks to use when +doing a multipart upload. - - Config: leave_parts_on_error - - Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR - - Type: bool - - Default: false +OCI has max parts limit of 10,000 chunks. 
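+
+As a quick sanity check of what this limit implies with the default
+5 MiB chunk size:
+
+    10,000 parts x 5 MiB/part = 50,000 MiB, or about 48.8 GiB
+
+which is where the ~48 GiB stream upload ceiling described under
+--oos-chunk-size comes from.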
- #### --oos-attempt-resume-upload +Rclone will automatically increase the chunk size when uploading a large +file of a known size to stay below this number of chunks limit. - If true attempt to resume previously started multipart upload for the object. - This will be helpful to speed up multipart transfers by resuming uploads from past session. +Properties: - WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is - aborted and a new multipart upload is started with the new chunk size. +- Config: max_upload_parts +- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS +- Type: int +- Default: 10000 - The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully. +--oos-upload-concurrency +Concurrency for multipart uploads. - Properties: +This is the number of chunks of the same file that are uploaded +concurrently. - - Config: attempt_resume_upload - - Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD - - Type: bool - - Default: false +If you are uploading small numbers of large files over high-speed links +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. - #### --oos-no-check-bucket +Properties: - If set, don't attempt to check the bucket exists or create it. +- Config: upload_concurrency +- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY +- Type: int +- Default: 10 - This can be useful when trying to minimise the number of transactions - rclone does if you know the bucket exists already. +--oos-copy-cutoff - It can also be needed if the user you are using does not have bucket - creation permissions. +Cutoff for switching to multipart copy. +Any files larger than this that need to be server-side copied will be +copied in chunks of this size. - Properties: +The minimum is 0 and the maximum is 5 GiB. - - Config: no_check_bucket - - Env Var: RCLONE_OOS_NO_CHECK_BUCKET - - Type: bool - - Default: false +Properties: - #### --oos-sse-customer-key-file +- Config: copy_cutoff +- Env Var: RCLONE_OOS_COPY_CUTOFF +- Type: SizeSuffix +- Default: 4.656Gi - To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated - with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.' +--oos-copy-timeout - Properties: +Timeout for copy. - - Config: sse_customer_key_file - - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE - - Type: string - - Required: false - - Examples: - - "" - - None +Copy is an asynchronous operation, specify timeout to wait for copy to +succeed - #### --oos-sse-customer-key +Properties: - To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to - encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is - needed. For more information, see Using Your Own Keys for Server-Side Encryption - (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) +- Config: copy_timeout +- Env Var: RCLONE_OOS_COPY_TIMEOUT +- Type: Duration +- Default: 1m0s - Properties: +--oos-disable-checksum - - Config: sse_customer_key - - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY - - Type: string - - Required: false - - Examples: - - "" - - None +Don't store MD5 checksum with object metadata. - #### --oos-sse-customer-key-sha256 +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can add it to metadata on the object. 
This is great +for data integrity checking but can cause long delays for large files to +start uploading. - If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption - key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for - Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). +Properties: - Properties: +- Config: disable_checksum +- Env Var: RCLONE_OOS_DISABLE_CHECKSUM +- Type: bool +- Default: false - - Config: sse_customer_key_sha256 - - Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 - - Type: string - - Required: false - - Examples: - - "" - - None +--oos-encoding - #### --oos-sse-kms-key-id +The encoding for the backend. - if using your own master key in vault, this header specifies the - OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call - the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. - Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. +See the encoding section in the overview for more info. - Properties: +Properties: - - Config: sse_kms_key_id - - Env Var: RCLONE_OOS_SSE_KMS_KEY_ID - - Type: string - - Required: false - - Examples: - - "" - - None +- Config: encoding +- Env Var: RCLONE_OOS_ENCODING +- Type: Encoding +- Default: Slash,InvalidUtf8,Dot - #### --oos-sse-customer-algorithm +--oos-leave-parts-on-error - If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. - Object Storage supports "AES256" as the encryption algorithm. For more information, see - Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). +If true avoid calling abort upload on a failure, leaving all +successfully uploaded parts for manual recovery. - Properties: +It should be set to true for resuming uploads across different sessions. - - Config: sse_customer_algorithm - - Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM - - Type: string - - Required: false - - Examples: - - "" - - None - - "AES256" - - AES256 +WARNING: Storing parts of an incomplete multipart upload counts towards +space usage on object storage and will add additional costs if not +cleaned up. - ## Backend commands +Properties: - Here are the commands specific to the oracleobjectstorage backend. +- Config: leave_parts_on_error +- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR +- Type: bool +- Default: false - Run them with +--oos-attempt-resume-upload - rclone backend COMMAND remote: +If true attempt to resume previously started multipart upload for the +object. This will be helpful to speed up multipart transfers by resuming +uploads from past session. - The help below will explain what arguments each command takes. +WARNING: If chunk size differs in resumed session from past incomplete +session, then the resumed multipart upload is aborted and a new +multipart upload is started with the new chunk size. - See the [backend](https://rclone.org/commands/rclone_backend/) command for more - info on how to pass options and arguments. +The flag leave_parts_on_error must be true to resume and optimize to +skip parts that were already uploaded successfully. - These can be run on a running backend using the rc command - [backend/command](https://rclone.org/rc/#backend-command). 
+Properties:

-    ### rename
+- Config: attempt_resume_upload
+- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD
+- Type: bool
+- Default: false

-    change the name of an object
+--oos-no-check-bucket

-        rclone backend rename remote: [options] [+]
+If set, don't attempt to check the bucket exists or create it.

-    This command can be used to rename a object.
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+
+It can also be needed if the user you are using does not have bucket
+creation permissions.
+
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
+
+--oos-sse-customer-key-file
+
+To use SSE-C, a file containing the base64-encoded string of the AES-256
+encryption key associated with the object. Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_customer_key_file
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+- Type: string
+- Required: false
+- Examples:
+    - ""
+        - None
+
+--oos-sse-customer-key
+
+To use SSE-C, the optional header that specifies the base64-encoded
+256-bit encryption key to use to encrypt or decrypt the data. Please
+note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id
+is needed. For more information, see Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+
+Properties:
+
+- Config: sse_customer_key
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+- Type: string
+- Required: false
+- Examples:
+    - ""
+        - None
+
+--oos-sse-customer-key-sha256
+
+If using SSE-C, the optional header that specifies the base64-encoded
+SHA256 hash of the encryption key. This value is used to check the
+integrity of the encryption key. See Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_key_sha256
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+- Type: string
+- Required: false
+- Examples:
+    - ""
+        - None
+
+--oos-sse-kms-key-id
+
+If using your own master key in vault, this header specifies the OCID
+(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm)
+of a master encryption key used to call the Key Management service to
+generate a data encryption key or to encrypt or decrypt a data
+encryption key. Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+
+Properties:
+
+- Config: sse_kms_key_id
+- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+- Type: string
+- Required: false
+- Examples:
+    - ""
+        - None
+
+--oos-sse-customer-algorithm
+
+If using SSE-C, the optional header that specifies "AES256" as the
+encryption algorithm. Object Storage supports "AES256" as the encryption
+algorithm. For more information, see Using Your Own Keys for Server-Side
+Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+
+Properties:
+
+- Config: sse_customer_algorithm
+- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+- Type: string
+- Required: false
+- Examples:
+    - ""
+        - None
+    - "AES256"
+        - AES256
+
+--oos-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_OOS_DESCRIPTION
+- Type: string
+- Required: false
+
+Backend commands
+
+Here are the commands specific to the oracleobjectstorage backend.
+
+Run them with
+
+    rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the backend command for more info on how to pass options and
+arguments.
+
+These can be run on a running backend using the rc command
+backend/command.
+
+rename
+
+change the name of an object
+
+    rclone backend rename remote: [options] [<key>=<value>]+
+
+This command can be used to rename an object.
+
+Usage Examples:
+
+    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+
+list-multipart-uploads
+
+List the unfinished multipart uploads
+
+    rclone backend list-multipart-uploads remote: [options] [<key>=<value>]+
+
+This command lists the unfinished multipart uploads in JSON format.
+
+    rclone backend list-multipart-uploads oos:bucket/path/to/object
+
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+
+You can call it with no bucket in which case it lists all buckets, with
+a bucket, or with a bucket and path.
+
+    {
+      "test-bucket": [
+        {
+          "namespace": "test-namespace",
+          "bucket": "test-bucket",
+          "object": "600m.bin",
+          "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+          "timeCreated": "2022-07-29T06:21:16.595Z",
+          "storageTier": "Standard"
+        }
+      ]
+    }
+
+cleanup
+
+Remove unfinished multipart uploads.
+
+    rclone backend cleanup remote: [options] [<key>=<value>]+
+
+This command removes unfinished multipart uploads of age greater than
+max-age which defaults to 24 hours.
+
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
+
+    rclone backend cleanup oos:bucket/path/to/object
+    rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+
+Options:
+
+- "max-age": Max age of upload to delete
+
+restore
+
+Restore objects from Archive to Standard storage
+
+    rclone backend restore remote: [options] [<key>=<value>]+
+
+This command can be used to restore one or more objects from Archive to
+Standard storage.

 Usage Examples:

-        rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+    rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
+    rclone backend restore oos:bucket -o hours=HOURS

+This flag also obeys the filters. Test first with --interactive/-i or
+--dry-run flags

-    ### list-multipart-uploads
+    rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72

-    List the unfinished multipart uploads
+All the objects shown will be marked for restore, then

-        rclone backend list-multipart-uploads remote: [options] [+]
+    rclone backend restore --include "*.txt" oos:bucket/path -o hours=72

-    This command lists the unfinished multipart uploads in JSON format.
-
-        rclone backend list-multipart-uploads oos:bucket/path/to/object
-
-    It returns a dictionary of buckets with values as lists of unfinished
-    multipart uploads.
-
-    You can call it with no bucket in which case it lists all bucket, with
-    a bucket or with a bucket and path.
+It returns a list of status dictionaries with Object Name and Status
+keys. The Status will be "RESTORED" if it was successful or an error
+message if not.
+
+    [
+      {

-    {
-        "test-bucket": [
-            {
-                "namespace": "test-namespace",
-                "bucket": "test-bucket",
-                "object": "600m.bin",
-                "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
-                "timeCreated": "2022-07-29T06:21:16.595Z",
-                "storageTier": "Standard"
-            }
-        ]

+        "Object": "test.txt",
+        "Status": "RESTORED"
+      },
+      {
+        "Object": "test/file4.txt",
+        "Status": "RESTORED"
+      }
+    ]
+
+Options:

-    ### cleanup

+- "hours": The number of hours for which this object will be restored.
+  Default is 24 hrs.

-    Remove unfinished multipart uploads.

+Tutorials

-        rclone backend cleanup remote: [options] [+]

+Mounting Buckets

-    This command removes unfinished multipart uploads of age greater than
-    max-age which defaults to 24 hours.

+QingStor

-    Note that you can use --interactive/-i or --dry-run with this command to see what
-    it would do.

+Paths are specified as remote:bucket (or remote: for the lsd command).
+You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

-        rclone backend cleanup oos:bucket/path/to/object
-        rclone backend cleanup -o max-age=7w oos:bucket/path/to/object

+Configuration

-    Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

+Here is an example of making a QingStor configuration. First run
+
+    rclone config

-    Options:
-
-    - "max-age": Max age of upload to delete
-
-    ## Tutorials
-    ### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/)
-
-    # QingStor
-
-    Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
-    command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
-
-    ## Configuration
-
-    Here is an example of making an QingStor configuration. First run
-
-        rclone config
-
-    This will guide you through an interactive setup process.
-
-No remotes found, make a new one? n) New remote r) Rename remote c) Copy
-remote s) Set configuration password q) Quit config n/r/c/s/q> n name>
-remote Type of storage to configure. Choose a number from below, or type
-in your own value [snip] XX / QingStor Object Storage  "qingstor" [snip]
-Storage> qingstor Get QingStor credentials from runtime. Only applies if
-access_key_id and secret_access_key is blank. Choose a number from
-below, or type in your own value 1 / Enter QingStor credentials in the
-next step  "false" 2 / Get QingStor credentials from the environment
-(env vars or IAM)  "true" env_auth> 1 QingStor Access Key ID - leave
-blank for anonymous access or runtime credentials. access_key_id>
-access_key QingStor Secret Access Key (password) - leave blank for
-anonymous access or runtime credentials. secret_access_key> secret_key
-Enter an endpoint URL to connection QingStor API. Leave blank will use
-the default value "https://qingstor.com:443" endpoint> Zone connect to.
-Default is "pek3a". Choose a number from below, or type in your own
-value / The Beijing (China) Three Zone 1 | Needs location constraint
-pek3a.  "pek3a" / The Shanghai (China) First Zone 2 | Needs location
-constraint sh1a.  "sh1a" zone> 1 Number of connection retry. Leave blank
-will use the default value "3".
connection_retries> Remote config --------------------- [remote] env_auth = false access_key_id = -access_key secret_access_key = secret_key endpoint = zone = pek3a -connection_retries = -------------------- y) Yes this is OK e) Edit this -remote d) Delete this remote y/e/d> y - - - This remote is called `remote` and can now be used like this - - See all buckets - - rclone lsd remote: - - Make a new bucket - - rclone mkdir remote:bucket - - List the contents of a bucket - - rclone ls remote:bucket - - Sync `/home/local/directory` to the remote bucket, deleting any excess - files in the bucket. - - rclone sync --interactive /home/local/directory remote:bucket - - ### --fast-list - - This remote supports `--fast-list` which allows you to use fewer - transactions in exchange for more memory. See the [rclone - docs](https://rclone.org/docs/#fast-list) for more details. - - ### Multipart uploads - - rclone supports multipart uploads with QingStor which means that it can - upload files bigger than 5 GiB. Note that files uploaded with multipart - upload don't have an MD5SUM. - - Note that incomplete multipart uploads older than 24 hours can be - removed with `rclone cleanup remote:bucket` just for one bucket - `rclone cleanup remote:` for all buckets. QingStor does not ever - remove incomplete multipart uploads so it may be necessary to run this - from time to time. - - ### Buckets and Zone - - With QingStor you can list buckets (`rclone lsd`) using any zone, - but you can only access the content of a bucket from the zone it was - created in. If you attempt to access a bucket from the wrong zone, - you will get an error, `incorrect zone, the bucket is not in 'XXX' - zone`. - - ### Authentication - - There are two ways to supply `rclone` with a set of QingStor - credentials. In order of precedence: - - - Directly in the rclone configuration file (as configured by `rclone config`) - - set `access_key_id` and `secret_access_key` - - Runtime configuration: - - set `env_auth` to `true` in the config file - - Exporting the following environment variables before running `rclone` - - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY` - - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY` - - ### Restricted filename characters - - The control characters 0x00-0x1F and / are replaced as in the [default - restricted characters set](https://rclone.org/overview/#restricted-characters). Note - that 0x7F is not replaced. - - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), - as they can't be used in JSON strings. - - - ### Standard options - - Here are the Standard options specific to qingstor (QingCloud Object Storage). - - #### --qingstor-env-auth - - Get QingStor credentials from runtime. - - Only applies if access_key_id and secret_access_key is blank. - - Properties: - - - Config: env_auth - - Env Var: RCLONE_QINGSTOR_ENV_AUTH - - Type: bool - - Default: false - - Examples: - - "false" - - Enter QingStor credentials in the next step. - - "true" - - Get QingStor credentials from the environment (env vars or IAM). - - #### --qingstor-access-key-id - - QingStor Access Key ID. - - Leave blank for anonymous access or runtime credentials. - - Properties: - - - Config: access_key_id - - Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID - - Type: string - - Required: false - - #### --qingstor-secret-access-key - - QingStor Secret Access Key (password). - - Leave blank for anonymous access or runtime credentials. 
-
-    Properties:
-
-    - Config: secret_access_key
-    - Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
-    - Type: string
-    - Required: false
-
-    #### --qingstor-endpoint

+This will guide you through an interactive setup process.
+
+    No remotes found, make a new one?
+    n) New remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    n/r/c/s/q> n
+    name> remote
+    Type of storage to configure.
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / QingStor Object Storage
+       \ "qingstor"
+    [snip]
+    Storage> qingstor
+    Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+    Choose a number from below, or type in your own value
+     1 / Enter QingStor credentials in the next step
+       \ "false"
+     2 / Get QingStor credentials from the environment (env vars or IAM)
+       \ "true"
+    env_auth> 1
+    QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+    access_key_id> access_key
+    QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+    secret_access_key> secret_key
+    Enter an endpoint URL to connection QingStor API.
+    Leave blank will use the default value "https://qingstor.com:443"
+    endpoint>
+    Zone connect to. Default is "pek3a".
+    Choose a number from below, or type in your own value
+       / The Beijing (China) Three Zone
+     1 | Needs location constraint pek3a.
+       \ "pek3a"
+       / The Shanghai (China) First Zone
+     2 | Needs location constraint sh1a.
+       \ "sh1a"
+    zone> 1
+    Number of connection retry.
+    Leave blank will use the default value "3".
+    connection_retries>
+    Remote config
+    --------------------
+    [remote]
+    env_auth = false
+    access_key_id = access_key
+    secret_access_key = secret_key
+    endpoint =
+    zone = pek3a
+    connection_retries =
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y

-    Leave blank will use the default value "https://qingstor.com:443".

+This remote is called remote and can now be used like this

-    Properties:

+See all buckets

-    - Config: endpoint
-    - Env Var: RCLONE_QINGSTOR_ENDPOINT
-    - Type: string
-    - Required: false

+    rclone lsd remote:

-    #### --qingstor-zone

+Make a new bucket

-    Zone to connect to.

+    rclone mkdir remote:bucket

-    Default is "pek3a".

+List the contents of a bucket

-    Properties:

+    rclone ls remote:bucket

-    - Config: zone
-    - Env Var: RCLONE_QINGSTOR_ZONE
-    - Type: string
-    - Required: false
-    - Examples:
-        - "pek3a"
-            - The Beijing (China) Three Zone.
-            - Needs location constraint pek3a.
-        - "sh1a"
-            - The Shanghai (China) First Zone.
-            - Needs location constraint sh1a.
-        - "gd2a"
-            - The Guangdong (China) Second Zone.
-            - Needs location constraint gd2a.

+Sync /home/local/directory to the remote bucket, deleting any excess
+files in the bucket.

-    ### Advanced options

+    rclone sync --interactive /home/local/directory remote:bucket

-    Here are the Advanced options specific to qingstor (QingCloud Object Storage).

+--fast-list

-    #### --qingstor-connection-retries

+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.

-    Number of connection retries.

+Multipart uploads

-    Properties:

+rclone supports multipart uploads with QingStor which means that it can
+upload files bigger than 5 GiB. Note that files uploaded with multipart
+upload don't have an MD5SUM.
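+
+As an illustrative example (the local path is hypothetical), raising
+the multipart threshold keeps smaller files on the single part path so
+they retain an MD5SUM:
+
+    rclone copy --qingstor-upload-cutoff 1G /data/backup.tar remote:bucket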
- - Config: connection_retries - - Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES - - Type: int - - Default: 3 +Note that incomplete multipart uploads older than 24 hours can be +removed with rclone cleanup remote:bucket just for one bucket +rclone cleanup remote: for all buckets. QingStor does not ever remove +incomplete multipart uploads so it may be necessary to run this from +time to time. - #### --qingstor-upload-cutoff +Buckets and Zone - Cutoff for switching to chunked upload. +With QingStor you can list buckets (rclone lsd) using any zone, but you +can only access the content of a bucket from the zone it was created in. +If you attempt to access a bucket from the wrong zone, you will get an +error, incorrect zone, the bucket is not in 'XXX' zone. - Any files larger than this will be uploaded in chunks of chunk_size. - The minimum is 0 and the maximum is 5 GiB. +Authentication - Properties: +There are two ways to supply rclone with a set of QingStor credentials. +In order of precedence: - - Config: upload_cutoff - - Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF - - Type: SizeSuffix - - Default: 200Mi +- Directly in the rclone configuration file (as configured by + rclone config) + - set access_key_id and secret_access_key +- Runtime configuration: + - set env_auth to true in the config file + - Exporting the following environment variables before running + rclone + - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY + - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY - #### --qingstor-chunk-size +Restricted filename characters - Chunk size to use for uploading. +The control characters 0x00-0x1F and / are replaced as in the default +restricted characters set. Note that 0x7F is not replaced. - When uploading files larger than upload_cutoff they will be uploaded - as multipart uploads using this chunk size. +Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON +strings. - Note that "--qingstor-upload-concurrency" chunks of this size are buffered - in memory per transfer. +Standard options - If you are transferring large files over high-speed links and you have - enough memory, then increasing this will speed up the transfers. +Here are the Standard options specific to qingstor (QingCloud Object +Storage). - Properties: +--qingstor-env-auth - - Config: chunk_size - - Env Var: RCLONE_QINGSTOR_CHUNK_SIZE - - Type: SizeSuffix - - Default: 4Mi +Get QingStor credentials from runtime. - #### --qingstor-upload-concurrency +Only applies if access_key_id and secret_access_key is blank. - Concurrency for multipart uploads. +Properties: - This is the number of chunks of the same file that are uploaded - concurrently. +- Config: env_auth +- Env Var: RCLONE_QINGSTOR_ENV_AUTH +- Type: bool +- Default: false +- Examples: + - "false" + - Enter QingStor credentials in the next step. + - "true" + - Get QingStor credentials from the environment (env vars or + IAM). - NB if you set this to > 1 then the checksums of multipart uploads - become corrupted (the uploads themselves are not corrupted though). +--qingstor-access-key-id - If you are uploading small numbers of large files over high-speed links - and these uploads do not fully utilize your bandwidth, then increasing - this may help to speed up the transfers. +QingStor Access Key ID. - Properties: +Leave blank for anonymous access or runtime credentials. 
- - Config: upload_concurrency - - Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY - - Type: int - - Default: 1 +Properties: - #### --qingstor-encoding +- Config: access_key_id +- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID +- Type: string +- Required: false - The encoding for the backend. +--qingstor-secret-access-key - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +QingStor Secret Access Key (password). - Properties: +Leave blank for anonymous access or runtime credentials. - - Config: encoding - - Env Var: RCLONE_QINGSTOR_ENCODING - - Type: Encoding - - Default: Slash,Ctl,InvalidUtf8 +Properties: +- Config: secret_access_key +- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY +- Type: string +- Required: false +--qingstor-endpoint - ## Limitations +Enter an endpoint URL to connection QingStor API. - `rclone about` is not supported by the qingstor backend. Backends without - this capability cannot determine free space for an rclone mount or - use policy `mfs` (most free space) as a member of an rclone union - remote. +Leave blank will use the default value "https://qingstor.com:443". - See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +Properties: - # Quatrix +- Config: endpoint +- Env Var: RCLONE_QINGSTOR_ENDPOINT +- Type: string +- Required: false - Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). +--qingstor-zone - Paths are specified as `remote:path` +Zone to connect to. - Paths may be as deep as required, e.g., `remote:directory/subdirectory`. +Default is "pek3a". - The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https:///profile/api-keys` - or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. +Properties: - See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer +- Config: zone +- Env Var: RCLONE_QINGSTOR_ZONE +- Type: string +- Required: false +- Examples: + - "pek3a" + - The Beijing (China) Three Zone. + - Needs location constraint pek3a. + - "sh1a" + - The Shanghai (China) First Zone. + - Needs location constraint sh1a. + - "gd2a" + - The Guangdong (China) Second Zone. + - Needs location constraint gd2a. - ## Configuration +Advanced options - Here is an example of how to make a remote called `remote`. First run: +Here are the Advanced options specific to qingstor (QingCloud Object +Storage). - rclone config +--qingstor-connection-retries - This will guide you through an interactive setup process: +Number of connection retries. -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> remote Type of storage to -configure. Choose a number from below, or type in your own value [snip] -XX / Quatrix by Maytech  "quatrix" [snip] Storage> quatrix API key for -accessing Quatrix account. api_key> your_api_key Host name of Quatrix -account. 
host> example.quatrix.it +Properties: - -------------------- - [remote] api_key = - your_api_key host = - example.quatrix.it - -------------------- - y) Yes this is OK e) - Edit this remote d) - Delete this remote - y/e/d> y ``` +- Config: connection_retries +- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES +- Type: int +- Default: 3 - Once configured you - can then use rclone - like this, +--qingstor-upload-cutoff - List directories in - top level of your - Quatrix +Cutoff for switching to chunked upload. - rclone lsd remote: +Any files larger than this will be uploaded in chunks of chunk_size. The +minimum is 0 and the maximum is 5 GiB. - List all the files - in your Quatrix +Properties: - rclone ls remote: +- Config: upload_cutoff +- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 200Mi - To copy a local - directory to an - Quatrix directory - called backup - - rclone copy - /home/source - remote:backup - - ### API key validity - - API Key is created - with no expiration - date. It will be - valid until you - delete or deactivate - it in your account. - After disabling, the - API Key can be - enabled back. If the - API Key was deleted - and a new key was - created, you can - update it in rclone - config. The same - happens if the - hostname was - changed. - - ``` $ rclone config - Current remotes: - - Name Type ==== ==== - remote quatrix - - e) Edit existing - remote n) New remote - d) Delete remote r) - Rename remote c) - Copy remote s) Set - configuration - password q) Quit - config - e/n/d/r/c/s/q> e - Choose a number from - below, or type in an - existing value 1 > - remote remote> - remote - -------------------- - -[remote] type = quatrix host = some_host.quatrix.it api_key = -your_api_key -------------------- Edit remote Option api_key. API key -for accessing Quatrix account Enter a string value. Press Enter for the -default (your_api_key) api_key> Option host. Host name of Quatrix -account Enter a string value. Press Enter for the default -(some_host.quatrix.it). - - -------------------------------------------------- - [remote] type = quatrix host = - some_host.quatrix.it api_key = your_api_key - -------------------------------------------------- - y) Yes this is OK e) Edit this remote d) Delete - this remote y/e/d> y ``` - - ### Modification times and hashes - - Quatrix allows modification times to be set on - objects accurate to 1 microsecond. These will be - used to detect whether objects need syncing or - not. - - Quatrix does not support hashes, so you cannot use - the --checksum flag. - - ### Restricted filename characters - - File names in Quatrix are case sensitive and have - limitations like the maximum length of a filename - is 255, and the minimum length is 1. A file name - cannot be equal to . or .. nor contain / , \ or - non-printable ascii. +--qingstor-chunk-size - ### Transfers +Chunk size to use for uploading. - For files above 50 MiB rclone will use a chunked - transfer. Rclone will upload up to --transfers - chunks at the same time (shared among all - multipart uploads). Chunks are buffered in memory, - and the minimal chunk size is 10_000_000 bytes by - default, and it can be changed in the advanced - configuration, so increasing --transfers will - increase the memory use. The chunk size has a - maximum size limit, which is set to 100_000_000 - bytes by default and can be changed in the - advanced configuration. The size of the uploaded - chunk will dynamically change depending on the - upload speed. 
The total memory use equals the - number of transfers multiplied by the minimal - chunk size. In case there's free memory allocated - for the upload (which equals the difference of - maximal_summary_chunk_size and minimal_chunk_size - * transfers), the chunk size may increase in case - of high upload speed. As well as it can decrease - in case of upload speed problems. If no free - memory is available, all chunks will equal - minimal_chunk_size. +When uploading files larger than upload_cutoff they will be uploaded as +multipart uploads using this chunk size. - ### Deleting files +Note that "--qingstor-upload-concurrency" chunks of this size are +buffered in memory per transfer. - Files you delete with rclone will end up in Trash - and be stored there for 30 days. Quatrix also - provides an API to permanently delete files and an - API to empty the Trash so that you can remove - files permanently from your account. +If you are transferring large files over high-speed links and you have +enough memory, then increasing this will speed up the transfers. - ### Standard options +Properties: - Here are the Standard options specific to quatrix - (Quatrix by Maytech). +- Config: chunk_size +- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE +- Type: SizeSuffix +- Default: 4Mi - #### --quatrix-api-key +--qingstor-upload-concurrency - API key for accessing Quatrix account +Concurrency for multipart uploads. - Properties: +This is the number of chunks of the same file that are uploaded +concurrently. - - Config: api_key - Env Var: - RCLONE_QUATRIX_API_KEY - Type: string - Required: - true +NB if you set this to > 1 then the checksums of multipart uploads become +corrupted (the uploads themselves are not corrupted though). - #### --quatrix-host +If you are uploading small numbers of large files over high-speed links +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. - Host name of Quatrix account +Properties: - Properties: +- Config: upload_concurrency +- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY +- Type: int +- Default: 1 - - Config: host - Env Var: RCLONE_QUATRIX_HOST - - Type: string - Required: true +--qingstor-encoding - ### Advanced options +The encoding for the backend. - Here are the Advanced options specific to quatrix - (Quatrix by Maytech). +See the encoding section in the overview for more info. - #### --quatrix-encoding +Properties: - The encoding for the backend. +- Config: encoding +- Env Var: RCLONE_QINGSTOR_ENCODING +- Type: Encoding +- Default: Slash,Ctl,InvalidUtf8 - See the encoding section in the overview for more - info. +--qingstor-description - Properties: +Description of the remote - - Config: encoding - Env Var: - RCLONE_QUATRIX_ENCODING - Type: Encoding - - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +Properties: - #### --quatrix-effective-upload-time +- Config: description +- Env Var: RCLONE_QINGSTOR_DESCRIPTION +- Type: string +- Required: false - Wanted upload time for one chunk +Limitations - Properties: +rclone about is not supported by the qingstor backend. Backends without +this capability cannot determine free space for an rclone mount or use +policy mfs (most free space) as a member of an rclone union remote. 
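+
+As a quick illustration (the bucket name is hypothetical): even though
+rclone about fails on this backend, you can still measure how much
+space a path uses with the generic size command:
+
+    rclone size remote:bucket
+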
- - Config: effective_upload_time - Env Var: - RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME - Type: - string - Default: "4s" +See List of backends that do not support rclone about and rclone about - #### --quatrix-minimal-chunk-size +Quatrix - The minimal size for one chunk +Quatrix by Maytech is Quatrix Secure Compliant File Sharing | Maytech. - Properties: +Paths are specified as remote:path - - Config: minimal_chunk_size - Env Var: - RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE - Type: - SizeSuffix - Default: 9.537Mi - - #### --quatrix-maximal-summary-chunk-size - - The maximal summary for all chunks. It should not - be less than 'transfers'*'minimal_chunk_size' +Paths may be as deep as required, e.g., remote:directory/subdirectory. - Properties: - - - Config: maximal_summary_chunk_size - Env Var: - RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE - Type: - SizeSuffix - Default: 95.367Mi +The initial setup for Quatrix involves getting an API Key from Quatrix. +You can get the API key in the user's profile at +https:///profile/api-keys or with the help of the API - +https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. - #### --quatrix-hard-delete +See complete Swagger documentation for Quatrix - +https://docs.maytech.net/quatrix/quatrix-api/api-explorer - Delete files permanently rather than putting them - into the trash. +Configuration - Properties: +Here is an example of how to make a remote called remote. First run: - - Config: hard_delete - Env Var: - RCLONE_QUATRIX_HARD_DELETE - Type: bool - Default: - false + rclone config - ## Storage usage +This will guide you through an interactive setup process: - The storage usage in Quatrix is restricted to the - account during the purchase. You can restrict any - user with a smaller storage limit. The account - limit is applied if the user has no custom storage - limit. Once you've reached the limit, the upload - of files will fail. This can be fixed by freeing - up the space or increasing the quota. + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / Quatrix by Maytech + \ "quatrix" + [snip] + Storage> quatrix + API key for accessing Quatrix account. + api_key> your_api_key + Host name of Quatrix account. + host> example.quatrix.it - ## Server-side operations - - Quatrix supports server-side operations (copy and - move). In case of conflict, files are overwritten - during server-side operation. + -------------------- + [remote] + api_key = your_api_key + host = example.quatrix.it + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +Once configured you can then use rclone like this, - # Sia +List directories in top level of your Quatrix + + rclone lsd remote: + +List all the files in your Quatrix - Sia (sia.tech) is a decentralized cloud storage - platform based on the blockchain technology. With - rclone you can use it like any other remote - filesystem or mount Sia folders locally. The - technology behind it involves a number of new - concepts such as Siacoins and Wallet, Blockchain - and Consensus, Renting and Hosting, and so on. If - you are new to it, you'd better first familiarize - yourself using their excellent support - documentation. 
+ rclone ls remote: - ## Introduction +To copy a local directory to an Quatrix directory called backup - Before you can use rclone with Sia, you will need - to have a running copy of Sia-UI or siad (the Sia - daemon) locally on your computer or on local - network (e.g. a NAS). Please follow the Get - started guide and install one. + rclone copy /home/source remote:backup - rclone interacts with Sia network by talking to - the Sia daemon via HTTP API which is usually - available on port 9980. By default you will run - the daemon locally on the same computer so it's - safe to leave the API password blank (the API URL - will be http://127.0.0.1:9980 making external - access impossible). +API key validity - However, if you want to access Sia daemon running - on another node, for example due to memory - constraints or because you want to share single - daemon between several rclone and Sia-UI - instances, you'll need to make a few more - provisions: - Ensure you have Sia daemon installed - directly or in a docker container because Sia-UI - does not support this mode natively. - Run it on - externally accessible port, for example provide - --api-addr :9980 and --disable-api-security - arguments on the daemon command line. - Enforce - API password for the siad daemon via environment - variable SIA_API_PASSWORD or text file named - apipassword in the daemon directory. - Set rclone - backend option api_password taking it from above - locations. +API Key is created with no expiration date. It will be valid until you +delete or deactivate it in your account. After disabling, the API Key +can be enabled back. If the API Key was deleted and a new key was +created, you can update it in rclone config. The same happens if the +hostname was changed. - Notes: 1. If your wallet is locked, rclone cannot - unlock it automatically. You should either unlock - it in advance by using Sia-UI or via command line - siac wallet unlock. Alternatively you can make - siad unlock your wallet automatically upon startup - by running it with environment variable - SIA_WALLET_PASSWORD. 2. If siad cannot find the - SIA_API_PASSWORD variable or the apipassword file - in the SIA_DIR directory, it will generate a - random password and store in the text file named - apipassword under YOUR_HOME/.sia/ directory on - Unix or - C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword - on Windows. Remember this when you configure - password in rclone. 3. The only way to use siad - without API password is to run it on localhost - with command line argument --authorize-api=false, - but this is insecure and strongly discouraged. - - ## Configuration - - Here is an example of how to make a sia remote - called mySia. First, run: - - rclone config - - This will guide you through an interactive setup - process: - - ``` No remotes found, make a new one? n) New - remote s) Set configuration password q) Quit - config n/s/q> n name> mySia Type of storage to - configure. Enter a string value. Press Enter for - the default (""). Choose a number from below, or - type in your own value ... 29 / Sia Decentralized - Cloud  "sia" ... Storage> sia Sia daemon API URL, - like http://sia.daemon.host:9980. Note that siad - must run with --disable-api-security to open API - port for other hosts (not recommended). Keep - default if Sia daemon runs on localhost. Enter a - string value. Press Enter for the default - ("http://127.0.0.1:9980"). api_url> - http://127.0.0.1:9980 Sia Daemon API Password. 
Can - be found in the apipassword file located in - HOME/.sia/ or in the daemon directory. y) Yes type - in my own password g) Generate random password n) - No leave this optional password blank (default) - y/g/n> y Enter the password: password: Confirm the - password: password: Edit advanced config? y) Yes - n) No (default) y/n> n - -------------------------------------------------- - -[mySia] type = sia api_url = http://127.0.0.1:9980 api_password = *** -ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit -this remote d) Delete this remote y/e/d> y - - - Once configured, you can then use `rclone` like this: - - - List directories in top level of your Sia storage - -rclone lsd mySia: - - - - List all the files in your Sia storage - -rclone ls mySia: - - - - Upload a local directory to the Sia directory called _backup_ - -rclone copy /home/source mySia:backup - - - - ### Standard options - - Here are the Standard options specific to sia (Sia Decentralized Cloud). - - #### --sia-api-url + $ rclone config + Current remotes: + Name Type + ==== ==== + remote quatrix + + e) Edit existing remote + n) New remote + d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + e/n/d/r/c/s/q> e + Choose a number from below, or type in an existing value + 1 > remote + remote> remote + -------------------- + [remote] + type = quatrix + host = some_host.quatrix.it + api_key = your_api_key + -------------------- + Edit remote + Option api_key. + API key for accessing Quatrix account + Enter a string value. Press Enter for the default (your_api_key) + api_key> + Option host. + Host name of Quatrix account + Enter a string value. Press Enter for the default (some_host.quatrix.it). + + -------------------- + [remote] + type = quatrix + host = some_host.quatrix.it + api_key = your_api_key + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +Modification times and hashes + +Quatrix allows modification times to be set on objects accurate to 1 +microsecond. These will be used to detect whether objects need syncing +or not. + +Quatrix does not support hashes, so you cannot use the --checksum flag. + +Restricted filename characters + +File names in Quatrix are case sensitive and have limitations like the +maximum length of a filename is 255, and the minimum length is 1. A file +name cannot be equal to . or .. nor contain / , \ or non-printable +ascii. + +Transfers + +For files above 50 MiB rclone will use a chunked transfer. Rclone will +upload up to --transfers chunks at the same time (shared among all +multipart uploads). Chunks are buffered in memory, and the minimal chunk +size is 10_000_000 bytes by default, and it can be changed in the +advanced configuration, so increasing --transfers will increase the +memory use. The chunk size has a maximum size limit, which is set to +100_000_000 bytes by default and can be changed in the advanced +configuration. The size of the uploaded chunk will dynamically change +depending on the upload speed. The total memory use equals the number of +transfers multiplied by the minimal chunk size. In case there's free +memory allocated for the upload (which equals the difference of +maximal_summary_chunk_size and minimal_chunk_size * transfers), the +chunk size may increase in case of high upload speed. As well as it can +decrease in case of upload speed problems. If no free memory is +available, all chunks will equal minimal_chunk_size. 
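+
+As a back-of-the-envelope sketch (the figures come from the defaults
+described in this section and are illustrative, not recommendations):
+with --transfers 4 and the default minimal chunk size of 9.537Mi, the
+baseline memory use is about 4 * 9.537Mi = 38Mi, leaving roughly
+95.367Mi - 38Mi = 57Mi of headroom for chunks to grow on fast links.
+The flags below mirror the advanced options listed further on, so you
+can pin these limits explicitly:
+
+    # illustrative invocation capping worst-case chunk memory near 95Mi
+    rclone copy /home/source remote:backup --transfers 4 \
+        --quatrix-minimal-chunk-size 9.537Mi \
+        --quatrix-maximal-summary-chunk-size 95.367Mi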
+ +Deleting files + +Files you delete with rclone will end up in Trash and be stored there +for 30 days. Quatrix also provides an API to permanently delete files +and an API to empty the Trash so that you can remove files permanently +from your account. + +Standard options + +Here are the Standard options specific to quatrix (Quatrix by Maytech). + +--quatrix-api-key + +API key for accessing Quatrix account + +Properties: + +- Config: api_key +- Env Var: RCLONE_QUATRIX_API_KEY +- Type: string +- Required: true + +--quatrix-host + +Host name of Quatrix account + +Properties: + +- Config: host +- Env Var: RCLONE_QUATRIX_HOST +- Type: string +- Required: true + +Advanced options + +Here are the Advanced options specific to quatrix (Quatrix by Maytech). + +--quatrix-encoding + +The encoding for the backend. + +See the encoding section in the overview for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_QUATRIX_ENCODING +- Type: Encoding +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + +--quatrix-effective-upload-time + +Wanted upload time for one chunk + +Properties: + +- Config: effective_upload_time +- Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME +- Type: string +- Default: "4s" + +--quatrix-minimal-chunk-size + +The minimal size for one chunk + +Properties: + +- Config: minimal_chunk_size +- Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE +- Type: SizeSuffix +- Default: 9.537Mi + +--quatrix-maximal-summary-chunk-size + +The maximal summary for all chunks. It should not be less than +'transfers'*'minimal_chunk_size' + +Properties: + +- Config: maximal_summary_chunk_size +- Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE +- Type: SizeSuffix +- Default: 95.367Mi + +--quatrix-hard-delete + +Delete files permanently rather than putting them into the trash + +Properties: + +- Config: hard_delete +- Env Var: RCLONE_QUATRIX_HARD_DELETE +- Type: bool +- Default: false + +--quatrix-skip-project-folders + +Skip project folders in operations + +Properties: + +- Config: skip_project_folders +- Env Var: RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS +- Type: bool +- Default: false + +--quatrix-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_QUATRIX_DESCRIPTION +- Type: string +- Required: false + +Storage usage + +The storage usage in Quatrix is restricted to the account during the +purchase. You can restrict any user with a smaller storage limit. The +account limit is applied if the user has no custom storage limit. Once +you've reached the limit, the upload of files will fail. This can be +fixed by freeing up the space or increasing the quota. + +Server-side operations + +Quatrix supports server-side operations (copy and move). In case of +conflict, files are overwritten during server-side operation. + +Sia + +Sia (sia.tech) is a decentralized cloud storage platform based on the +blockchain technology. With rclone you can use it like any other remote +filesystem or mount Sia folders locally. The technology behind it +involves a number of new concepts such as Siacoins and Wallet, +Blockchain and Consensus, Renting and Hosting, and so on. If you are new +to it, you'd better first familiarize yourself using their excellent +support documentation. + +Introduction + +Before you can use rclone with Sia, you will need to have a running copy +of Sia-UI or siad (the Sia daemon) locally on your computer or on local +network (e.g. a NAS). Please follow the Get started guide and install +one. 
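+
+Before configuring rclone you can sanity-check that the daemon is
+reachable. This is an illustrative check only, assuming the default API
+address http://127.0.0.1:9980 and the mandatory Sia-Agent user agent,
+both described below:
+
+    # ask the local siad for its version
+    curl -A "Sia-Agent" http://127.0.0.1:9980/daemon/version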
+
+rclone interacts with the Sia network by talking to the Sia daemon via
+an HTTP API which is usually available on port 9980. By default you
+will run the daemon locally on the same computer so it's safe to leave
+the API password blank (the API URL will be http://127.0.0.1:9980
+making external access impossible).
+
+However, if you want to access a Sia daemon running on another node,
+for example due to memory constraints or because you want to share a
+single daemon between several rclone and Sia-UI instances, you'll need
+to make a few more provisions:
+
+- Ensure you have the Sia daemon installed directly or in a docker
+  container because Sia-UI does not support this mode natively.
+- Run it on an externally accessible port, for example provide
+  --api-addr :9980 and --disable-api-security arguments on the daemon
+  command line.
+- Enforce an API password for the siad daemon via the environment
+  variable SIA_API_PASSWORD or a text file named apipassword in the
+  daemon directory.
+- Set the rclone backend option api_password, taking it from the above
+  locations.
+
+Notes:
+
+1. If your wallet is locked, rclone cannot unlock it automatically. You
+   should either unlock it in advance by using Sia-UI or from the
+   command line with siac wallet unlock. Alternatively you can make
+   siad unlock your wallet automatically upon startup by running it
+   with the environment variable SIA_WALLET_PASSWORD.
+2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword
+   file in the SIA_DIR directory, it will generate a random password
+   and store it in the text file named apipassword under the
+   YOUR_HOME/.sia/ directory on Unix or
+   C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows.
+   Remember this when you configure the password in rclone.
+3. The only way to use siad without an API password is to run it on
+   localhost with the command line argument --authorize-api=false, but
+   this is insecure and strongly discouraged.
+
+Configuration
+
+Here is an example of how to make a sia remote called mySia. First,
+run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> mySia
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    ...
+    29 / Sia Decentralized Cloud
+       \ "sia"
+    ...
+    Storage> sia
+    Sia daemon API URL, like http://sia.daemon.host:9980.
- Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.
-
- Properties:
-
- - Config: api_url
- - Env Var: RCLONE_SIA_API_URL
- - Type: string
- - Default: "http://127.0.0.1:9980"
-
- #### --sia-api-password
-
+    Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
+    api_url> http://127.0.0.1:9980
+    Sia Daemon API Password.
- Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+    y) Yes type in my own password
+    g) Generate random password
+    n) No leave this optional password blank (default)
+    y/g/n> y
+    Enter the password:
+    password:
+    Confirm the password:
+    password:
+    Edit advanced config?
+ y) Yes + n) No (default) + y/n> n + -------------------- + [mySia] + type = sia + api_url = http://127.0.0.1:9980 + api_password = *** ENCRYPTED *** + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). +Once configured, you can then use rclone like this: - Properties: +- List directories in top level of your Sia storage - - Config: api_password - - Env Var: RCLONE_SIA_API_PASSWORD - - Type: string - - Required: false + rclone lsd mySia: - ### Advanced options +- List all the files in your Sia storage - Here are the Advanced options specific to sia (Sia Decentralized Cloud). + rclone ls mySia: - #### --sia-user-agent +- Upload a local directory to the Sia directory called backup - Siad User Agent + rclone copy /home/source mySia:backup - Sia daemon requires the 'Sia-Agent' user agent by default for security +Standard options - Properties: +Here are the Standard options specific to sia (Sia Decentralized Cloud). - - Config: user_agent - - Env Var: RCLONE_SIA_USER_AGENT - - Type: string - - Default: "Sia-Agent" +--sia-api-url - #### --sia-encoding +Sia daemon API URL, like http://sia.daemon.host:9980. - The encoding for the backend. +Note that siad must run with --disable-api-security to open API port for +other hosts (not recommended). Keep default if Sia daemon runs on +localhost. - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +Properties: - Properties: +- Config: api_url +- Env Var: RCLONE_SIA_API_URL +- Type: string +- Default: "http://127.0.0.1:9980" - - Config: encoding - - Env Var: RCLONE_SIA_ENCODING - - Type: Encoding - - Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot +--sia-api-password +Sia Daemon API Password. +Can be found in the apipassword file located in HOME/.sia/ or in the +daemon directory. - ## Limitations +NB Input to this must be obscured - see rclone obscure. - - Modification times not supported - - Checksums not supported - - `rclone about` not supported - - rclone can work only with _Siad_ or _Sia-UI_ at the moment, - the **SkyNet daemon is not supported yet.** - - Sia does not allow control characters or symbols like question and pound - signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) - them for you, but you'd better be aware +Properties: - # Swift +- Config: api_password +- Env Var: RCLONE_SIA_API_PASSWORD +- Type: string +- Required: false - Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). - Commercial implementations of that being: +Advanced options - * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) - * [Memset Memstore](https://www.memset.com/cloud/storage/) - * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) - * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) - * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) - * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) +Here are the Advanced options specific to sia (Sia Decentralized Cloud). - Paths are specified as `remote:container` (or `remote:` for the `lsd` - command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. 
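+
+A brief aside on --sia-api-password above: the value stored in the
+config must be the obscured form, not the plaintext. A minimal sketch
+(the password shown is purely illustrative):
+
+    # print the obscured form suitable for the api_password field
+    rclone obscure verySecretApiPassword
+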
+--sia-user-agent - ## Configuration +Siad User Agent - Here is an example of making a swift configuration. First run +Sia daemon requires the 'Sia-Agent' user agent by default for security - rclone config +Properties: - This will guide you through an interactive setup process. +- Config: user_agent +- Env Var: RCLONE_SIA_USER_AGENT +- Type: string +- Default: "Sia-Agent" -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> remote Type of storage to -configure. Choose a number from below, or type in your own value [snip] -XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset -Memstore, OVH)  "swift" [snip] Storage> swift Get swift credentials from -environment variables in standard OpenStack form. Choose a number from -below, or type in your own value 1 / Enter swift credentials in the next -step  "false" 2 / Get swift credentials from environment vars. Leave -other fields blank if using this.  "true" env_auth> true User name to -log in (OS_USERNAME). user> API key or password (OS_PASSWORD). key> -Authentication URL for server (OS_AUTH_URL). Choose a number from below, -or type in your own value 1 / Rackspace US - "https://auth.api.rackspacecloud.com/v1.0" 2 / Rackspace UK - "https://lon.auth.api.rackspacecloud.com/v1.0" 3 / Rackspace v2 - "https://identity.api.rackspacecloud.com/v2.0" 4 / Memset Memstore UK - "https://auth.storage.memset.com/v1.0" 5 / Memset Memstore UK v2 - "https://auth.storage.memset.com/v2.0" 6 / OVH - "https://auth.cloud.ovh.net/v3" 7 / Blomp Cloud Storage - "https://authenticate.ain.net" auth> User ID to log in - optional - -most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). -user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> -Tenant name - optional for v1 auth, this or tenant_id required otherwise -(OS_TENANT_NAME or OS_PROJECT_NAME) tenant> Tenant ID - optional for v1 -auth, this or tenant required otherwise (OS_TENANT_ID) tenant_id> Tenant -domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) tenant_domain> -Region name - optional (OS_REGION_NAME) region> Storage URL - optional -(OS_STORAGE_URL) storage_url> Auth Token from alternate authentication - -optional (OS_AUTH_TOKEN) auth_token> AuthVersion - optional - set to -(1,2,3) if your auth URL has no version (ST_AUTH_VERSION) auth_version> -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) -Choose a number from below, or type in your own value 1 / Public -(default, choose this if not sure)  "public" 2 / Internal (use internal -service net)  "internal" 3 / Admin  "admin" endpoint_type> Remote config --------------------- [test] env_auth = true user = key = auth = user_id -= domain = tenant = tenant_id = tenant_domain = region = storage_url = -auth_token = auth_version = endpoint_type = -------------------- y) Yes -this is OK e) Edit this remote d) Delete this remote y/e/d> y +--sia-encoding +The encoding for the backend. - This remote is called `remote` and can now be used like this +See the encoding section in the overview for more info. 
- See all containers +Properties: - rclone lsd remote: +- Config: encoding +- Env Var: RCLONE_SIA_ENCODING +- Type: Encoding +- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot - Make a new container +--sia-description - rclone mkdir remote:container +Description of the remote - List the contents of a container +Properties: - rclone ls remote:container +- Config: description +- Env Var: RCLONE_SIA_DESCRIPTION +- Type: string +- Required: false - Sync `/home/local/directory` to the remote container, deleting any - excess files in the container. +Limitations - rclone sync --interactive /home/local/directory remote:container +- Modification times not supported +- Checksums not supported +- rclone about not supported +- rclone can work only with Siad or Sia-UI at the moment, the SkyNet + daemon is not supported yet. +- Sia does not allow control characters or symbols like question and + pound signs in file names. rclone will transparently encode them for + you, but you'd better be aware - ### Configuration from an OpenStack credentials file +Swift - An OpenStack credentials file typically looks something something - like this (without the comments) +Swift refers to OpenStack Object Storage. Commercial implementations of +that being: -export OS_AUTH_URL=https://a.provider.net/v2.0 export -OS_TENANT_ID=ffffffffffffffffffffffffffffffff export -OS_TENANT_NAME="1234567890123456" export OS_USERNAME="123abc567xy" echo -"Please enter your OpenStack Password: " read -sr OS_PASSWORD_INPUT -export -OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME="SBG1" if [ -z "$OS_REGION_NAME" -]; then unset OS_REGION_NAME; fi +- Rackspace Cloud Files +- Memset Memstore +- OVH Object Storage +- Oracle Cloud Storage +- Blomp Cloud Storage +- IBM Bluemix Cloud ObjectStorage Swift +Paths are specified as remote:container (or remote: for the lsd +command.) You may put subdirectories in too, e.g. +remote:container/path/to/dir. - The config file needs to look something like this where `$OS_USERNAME` - represents the value of the `OS_USERNAME` variable - `123abc567xy` in - the example above. +Configuration -[remote] type = swift user = $OS_USERNAME key = $OS_PASSWORD auth = -$OS_AUTH_URL tenant = $OS_TENANT_NAME +Here is an example of making a swift configuration. First run + rclone config - Note that you may (or may not) need to set `region` too - try without first. - - ### Configuration from the environment - - If you prefer you can configure rclone to use swift using a standard - set of OpenStack environment variables. - - When you run through the config, make sure you choose `true` for - `env_auth` and leave everything else blank. - - rclone will then set any empty config parameters from the environment - using standard OpenStack environment variables. There is [a list of - the - variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) - in the docs for the swift library. - - ### Using an alternate authentication method - - If your OpenStack installation uses a non-standard authentication method - that might not be yet supported by rclone or the underlying swift library, - you can authenticate externally (e.g. calling manually the `openstack` - commands to get a token). Then, you just need to pass the two - configuration variables ``auth_token`` and ``storage_url``. - If they are both provided, the other variables are ignored. rclone will - not try to authenticate but instead assume it is already authenticated - and use these two variables to access the OpenStack installation. 
- - #### Using rclone without a config file - - You can use rclone with swift without a config file, if desired, like - this: - -source openstack-credentials-file export -RCLONE_CONFIG_MYREMOTE_TYPE=swift export -RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: - - - ### --fast-list - - This remote supports `--fast-list` which allows you to use fewer - transactions in exchange for more memory. See the [rclone - docs](https://rclone.org/docs/#fast-list) for more details. - - ### --update and --use-server-modtime - - As noted below, the modified time is stored on metadata on the object. It is - used by default for all operations that require checking the time a file was - last updated. It allows rclone to treat the remote more like a true filesystem, - but it is inefficient because it requires an extra API call to retrieve the - metadata. - - For many operations, the time the object was last uploaded to the remote is - sufficient to determine if it is "dirty". By using `--update` along with - `--use-server-modtime`, you can avoid the extra API call and simply upload - files whose local modtime is newer than the time it was last uploaded. - - ### Modification times and hashes - - The modified time is stored as metadata on the object as - `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1 - ns. - - This is a de facto standard (used in the official python-swiftclient - amongst others) for storing the modification time for an object. - - The MD5 hash algorithm is supported. - - ### Restricted filename characters - - | Character | Value | Replacement | - | --------- |:-----:|:-----------:| - | NUL | 0x00 | ␀ | - | / | 0x2F | / | - - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), - as they can't be used in JSON strings. - - - ### Standard options - - Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - - #### --swift-env-auth +This will guide you through an interactive setup process. + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH) + \ "swift" + [snip] + Storage> swift Get swift credentials from environment variables in standard OpenStack form. - - Properties: - - - Config: env_auth - - Env Var: RCLONE_SWIFT_ENV_AUTH - - Type: bool - - Default: false - - Examples: - - "false" - - Enter swift credentials in the next step. - - "true" - - Get swift credentials from environment vars. - - Leave other fields blank if using this. - - #### --swift-user - + Choose a number from below, or type in your own value + 1 / Enter swift credentials in the next step + \ "false" + 2 / Get swift credentials from environment vars. Leave other fields blank if using this. + \ "true" + env_auth> true User name to log in (OS_USERNAME). - - Properties: - - - Config: user - - Env Var: RCLONE_SWIFT_USER - - Type: string - - Required: false - - #### --swift-key - + user> API key or password (OS_PASSWORD). - - Properties: - - - Config: key - - Env Var: RCLONE_SWIFT_KEY - - Type: string - - Required: false - - #### --swift-auth - + key> Authentication URL for server (OS_AUTH_URL). 
- - Properties: - - - Config: auth - - Env Var: RCLONE_SWIFT_AUTH - - Type: string - - Required: false - - Examples: - - "https://auth.api.rackspacecloud.com/v1.0" - - Rackspace US - - "https://lon.auth.api.rackspacecloud.com/v1.0" - - Rackspace UK - - "https://identity.api.rackspacecloud.com/v2.0" - - Rackspace v2 - - "https://auth.storage.memset.com/v1.0" - - Memset Memstore UK - - "https://auth.storage.memset.com/v2.0" - - Memset Memstore UK v2 - - "https://auth.cloud.ovh.net/v3" - - OVH - - "https://authenticate.ain.net" - - Blomp Cloud Storage - - #### --swift-user-id - + Choose a number from below, or type in your own value + 1 / Rackspace US + \ "https://auth.api.rackspacecloud.com/v1.0" + 2 / Rackspace UK + \ "https://lon.auth.api.rackspacecloud.com/v1.0" + 3 / Rackspace v2 + \ "https://identity.api.rackspacecloud.com/v2.0" + 4 / Memset Memstore UK + \ "https://auth.storage.memset.com/v1.0" + 5 / Memset Memstore UK v2 + \ "https://auth.storage.memset.com/v2.0" + 6 / OVH + \ "https://auth.cloud.ovh.net/v3" + 7 / Blomp Cloud Storage + \ "https://authenticate.ain.net" + auth> User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). - - Properties: - - - Config: user_id - - Env Var: RCLONE_SWIFT_USER_ID - - Type: string - - Required: false - - #### --swift-domain - + user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) + domain> + Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) + tenant> + Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) + tenant_id> + Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) + tenant_domain> + Region name - optional (OS_REGION_NAME) + region> + Storage URL - optional (OS_STORAGE_URL) + storage_url> + Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) + auth_token> + AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) + auth_version> + Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) + Choose a number from below, or type in your own value + 1 / Public (default, choose this if not sure) + \ "public" + 2 / Internal (use internal service net) + \ "internal" + 3 / Admin + \ "admin" + endpoint_type> + Remote config + -------------------- + [test] + env_auth = true + user = + key = + auth = + user_id = + domain = + tenant = + tenant_id = + tenant_domain = + region = + storage_url = + auth_token = + auth_version = + endpoint_type = + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +This remote is called remote and can now be used like this + +See all containers + + rclone lsd remote: + +Make a new container + + rclone mkdir remote:container + +List the contents of a container + + rclone ls remote:container + +Sync /home/local/directory to the remote container, deleting any excess +files in the container. 
+
+    rclone sync --interactive /home/local/directory remote:container
+
+Configuration from an OpenStack credentials file
+
+An OpenStack credentials file typically looks something like this
+(without the comments)
+
+    export OS_AUTH_URL=https://a.provider.net/v2.0
+    export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+    export OS_TENANT_NAME="1234567890123456"
+    export OS_USERNAME="123abc567xy"
+    echo "Please enter your OpenStack Password: "
+    read -sr OS_PASSWORD_INPUT
+    export OS_PASSWORD=$OS_PASSWORD_INPUT
+    export OS_REGION_NAME="SBG1"
+    if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+
+The config file needs to look something like this where $OS_USERNAME
+represents the value of the OS_USERNAME variable - 123abc567xy in the
+example above.
+
+    [remote]
+    type = swift
+    user = $OS_USERNAME
+    key = $OS_PASSWORD
+    auth = $OS_AUTH_URL
+    tenant = $OS_TENANT_NAME
+
+Note that you may (or may not) need to set region too - try without
+first.
+
+Configuration from the environment
+
+If you prefer you can configure rclone to use swift using a standard set
+of OpenStack environment variables.
+
+When you run through the config, make sure you choose true for env_auth
+and leave everything else blank.
+
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables. There is a list of the
+variables in the docs for the swift library.
+
+Using an alternate authentication method

- Properties:

+If your OpenStack installation uses a non-standard authentication method
+that might not yet be supported by rclone or the underlying swift
+library, you can authenticate externally (e.g. by manually calling the
+openstack commands to get a token). Then, you just need to pass the two
+configuration variables auth_token and storage_url. If they are both
+provided, the other variables are ignored. rclone will not try to
+authenticate but instead assume it is already authenticated and use
+these two variables to access the OpenStack installation.

- - Config: domain
- - Env Var: RCLONE_SWIFT_DOMAIN
- - Type: string
- - Required: false

+Using rclone without a config file

- #### --swift-tenant

+You can use rclone with swift without a config file, if desired, like
+this:

- Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).

+    source openstack-credentials-file
+    export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+    export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+    rclone lsd myremote:

- Properties:

+--fast-list

- - Config: tenant
- - Env Var: RCLONE_SWIFT_TENANT
- - Type: string
- - Required: false

+This remote supports --fast-list which allows you to use fewer
+transactions in exchange for more memory. See the rclone docs for more
+details.

- #### --swift-tenant-id

+--update and --use-server-modtime

- Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).

+As noted below, the modified time is stored on metadata on the object.
+It is used by default for all operations that require checking the time
+a file was last updated. It allows rclone to treat the remote more like
+a true filesystem, but it is inefficient because it requires an extra
+API call to retrieve the metadata.

- Properties:

+For many operations, the time the object was last uploaded to the remote
+is sufficient to determine if it is "dirty".
By using --update along +with --use-server-modtime, you can avoid the extra API call and simply +upload files whose local modtime is newer than the time it was last +uploaded. - - Config: tenant_id - - Env Var: RCLONE_SWIFT_TENANT_ID - - Type: string - - Required: false +Modification times and hashes - #### --swift-tenant-domain +The modified time is stored as metadata on the object as +X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns. - Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). +This is a de facto standard (used in the official python-swiftclient +amongst others) for storing the modification time for an object. - Properties: +The MD5 hash algorithm is supported. - - Config: tenant_domain - - Env Var: RCLONE_SWIFT_TENANT_DOMAIN - - Type: string - - Required: false +Restricted filename characters - #### --swift-region + Character Value Replacement + ----------- ------- ------------- + NUL 0x00 ␀ + / 0x2F / - Region name - optional (OS_REGION_NAME). +Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON +strings. - Properties: +Standard options - - Config: region - - Env Var: RCLONE_SWIFT_REGION - - Type: string - - Required: false +Here are the Standard options specific to swift (OpenStack Swift +(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - #### --swift-storage-url +--swift-env-auth - Storage URL - optional (OS_STORAGE_URL). +Get swift credentials from environment variables in standard OpenStack +form. - Properties: +Properties: - - Config: storage_url - - Env Var: RCLONE_SWIFT_STORAGE_URL - - Type: string - - Required: false +- Config: env_auth +- Env Var: RCLONE_SWIFT_ENV_AUTH +- Type: bool +- Default: false +- Examples: + - "false" + - Enter swift credentials in the next step. + - "true" + - Get swift credentials from environment vars. + - Leave other fields blank if using this. - #### --swift-auth-token +--swift-user - Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). +User name to log in (OS_USERNAME). - Properties: +Properties: - - Config: auth_token - - Env Var: RCLONE_SWIFT_AUTH_TOKEN - - Type: string - - Required: false +- Config: user +- Env Var: RCLONE_SWIFT_USER +- Type: string +- Required: false - #### --swift-application-credential-id +--swift-key - Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). +API key or password (OS_PASSWORD). - Properties: +Properties: - - Config: application_credential_id - - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID - - Type: string - - Required: false +- Config: key +- Env Var: RCLONE_SWIFT_KEY +- Type: string +- Required: false - #### --swift-application-credential-name +--swift-auth - Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). +Authentication URL for server (OS_AUTH_URL). 
- Properties: +Properties: - - Config: application_credential_name - - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME - - Type: string - - Required: false +- Config: auth +- Env Var: RCLONE_SWIFT_AUTH +- Type: string +- Required: false +- Examples: + - "https://auth.api.rackspacecloud.com/v1.0" + - Rackspace US + - "https://lon.auth.api.rackspacecloud.com/v1.0" + - Rackspace UK + - "https://identity.api.rackspacecloud.com/v2.0" + - Rackspace v2 + - "https://auth.storage.memset.com/v1.0" + - Memset Memstore UK + - "https://auth.storage.memset.com/v2.0" + - Memset Memstore UK v2 + - "https://auth.cloud.ovh.net/v3" + - OVH + - "https://authenticate.ain.net" + - Blomp Cloud Storage - #### --swift-application-credential-secret +--swift-user-id - Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). +User ID to log in - optional - most swift systems use user and leave +this blank (v3 auth) (OS_USER_ID). - Properties: +Properties: - - Config: application_credential_secret - - Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET - - Type: string - - Required: false +- Config: user_id +- Env Var: RCLONE_SWIFT_USER_ID +- Type: string +- Required: false - #### --swift-auth-version +--swift-domain - AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION). +User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - Properties: +Properties: - - Config: auth_version - - Env Var: RCLONE_SWIFT_AUTH_VERSION - - Type: int - - Default: 0 +- Config: domain +- Env Var: RCLONE_SWIFT_DOMAIN +- Type: string +- Required: false - #### --swift-endpoint-type +--swift-tenant - Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). +Tenant name - optional for v1 auth, this or tenant_id required otherwise +(OS_TENANT_NAME or OS_PROJECT_NAME). - Properties: +Properties: - - Config: endpoint_type - - Env Var: RCLONE_SWIFT_ENDPOINT_TYPE - - Type: string - - Default: "public" - - Examples: - - "public" - - Public (default, choose this if not sure) - - "internal" - - Internal (use internal service net) - - "admin" - - Admin +- Config: tenant +- Env Var: RCLONE_SWIFT_TENANT +- Type: string +- Required: false - #### --swift-storage-policy +--swift-tenant-id - The storage policy to use when creating a new container. +Tenant ID - optional for v1 auth, this or tenant required otherwise +(OS_TENANT_ID). - This applies the specified storage policy when creating a new - container. The policy cannot be changed afterwards. The allowed - configuration values and their meaning depend on your Swift storage - provider. +Properties: - Properties: +- Config: tenant_id +- Env Var: RCLONE_SWIFT_TENANT_ID +- Type: string +- Required: false - - Config: storage_policy - - Env Var: RCLONE_SWIFT_STORAGE_POLICY - - Type: string - - Required: false - - Examples: - - "" - - Default - - "pcs" - - OVH Public Cloud Storage - - "pca" - - OVH Public Cloud Archive +--swift-tenant-domain - ### Advanced options +Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). - Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). +Properties: - #### --swift-leave-parts-on-error +- Config: tenant_domain +- Env Var: RCLONE_SWIFT_TENANT_DOMAIN +- Type: string +- Required: false - If true avoid calling abort upload on a failure. +--swift-region - It should be set to true for resuming uploads across different sessions. +Region name - optional (OS_REGION_NAME). 
- Properties: +Properties: - - Config: leave_parts_on_error - - Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR - - Type: bool - - Default: false +- Config: region +- Env Var: RCLONE_SWIFT_REGION +- Type: string +- Required: false - #### --swift-chunk-size +--swift-storage-url - Above this size files will be chunked into a _segments container. +Storage URL - optional (OS_STORAGE_URL). - Above this size files will be chunked into a _segments container. The - default for this is 5 GiB which is its maximum value. +Properties: - Properties: +- Config: storage_url +- Env Var: RCLONE_SWIFT_STORAGE_URL +- Type: string +- Required: false - - Config: chunk_size - - Env Var: RCLONE_SWIFT_CHUNK_SIZE - - Type: SizeSuffix - - Default: 5Gi +--swift-auth-token - #### --swift-no-chunk +Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). - Don't chunk files during streaming upload. +Properties: - When doing streaming uploads (e.g. using rcat or mount) setting this - flag will cause the swift backend to not upload chunked files. +- Config: auth_token +- Env Var: RCLONE_SWIFT_AUTH_TOKEN +- Type: string +- Required: false - This will limit the maximum upload size to 5 GiB. However non chunked - files are easier to deal with and have an MD5SUM. +--swift-application-credential-id - Rclone will still chunk files bigger than chunk_size when doing normal - copy operations. +Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). - Properties: +Properties: - - Config: no_chunk - - Env Var: RCLONE_SWIFT_NO_CHUNK - - Type: bool - - Default: false +- Config: application_credential_id +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +- Type: string +- Required: false - #### --swift-no-large-objects +--swift-application-credential-name - Disable support for static and dynamic large objects +Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). - Swift cannot transparently store files bigger than 5 GiB. There are - two schemes for doing that, static or dynamic large objects, and the - API does not allow rclone to determine whether a file is a static or - dynamic large object without doing a HEAD on the object. Since these - need to be treated differently, this means rclone has to issue HEAD - requests for objects for example when reading checksums. +Properties: - When `no_large_objects` is set, rclone will assume that there are no - static or dynamic large objects stored. This means it can stop doing - the extra HEAD calls which in turn increases performance greatly - especially when doing a swift to swift transfer with `--checksum` set. +- Config: application_credential_name +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +- Type: string +- Required: false - Setting this option implies `no_chunk` and also that no files will be - uploaded in chunks, so files bigger than 5 GiB will just fail on - upload. +--swift-application-credential-secret - If you set this option and there *are* static or dynamic large objects, - then this will give incorrect hashes for them. Downloads will succeed, - but other operations such as Remove and Copy will fail. +Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). 
+Properties: - Properties: +- Config: application_credential_secret +- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +- Type: string +- Required: false - - Config: no_large_objects - - Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS - - Type: bool - - Default: false +--swift-auth-version - #### --swift-encoding +AuthVersion - optional - set to (1,2,3) if your auth URL has no version +(ST_AUTH_VERSION). - The encoding for the backend. +Properties: - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +- Config: auth_version +- Env Var: RCLONE_SWIFT_AUTH_VERSION +- Type: int +- Default: 0 - Properties: +--swift-endpoint-type - - Config: encoding - - Env Var: RCLONE_SWIFT_ENCODING - - Type: Encoding - - Default: Slash,InvalidUtf8 +Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). +Properties: +- Config: endpoint_type +- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE +- Type: string +- Default: "public" +- Examples: + - "public" + - Public (default, choose this if not sure) + - "internal" + - Internal (use internal service net) + - "admin" + - Admin - ## Limitations +--swift-storage-policy - The Swift API doesn't return a correct MD5SUM for segmented files - (Dynamic or Static Large Objects) so rclone won't check or use the - MD5SUM for these. +The storage policy to use when creating a new container. - ## Troubleshooting +This applies the specified storage policy when creating a new container. +The policy cannot be changed afterwards. The allowed configuration +values and their meaning depend on your Swift storage provider. - ### Rclone gives Failed to create file system for "remote:": Bad Request +Properties: - Due to an oddity of the underlying swift library, it gives a "Bad - Request" error rather than a more sensible error when the - authentication fails for Swift. +- Config: storage_policy +- Env Var: RCLONE_SWIFT_STORAGE_POLICY +- Type: string +- Required: false +- Examples: + - "" + - Default + - "pcs" + - OVH Public Cloud Storage + - "pca" + - OVH Public Cloud Archive - So this most likely means your username / password is wrong. You can - investigate further with the `--dump-bodies` flag. +Advanced options - This may also be caused by specifying the region when you shouldn't - have (e.g. OVH). +Here are the Advanced options specific to swift (OpenStack Swift +(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - ### Rclone gives Failed to create file system: Response didn't have storage url and auth token +--swift-leave-parts-on-error - This is most likely caused by forgetting to specify your tenant when - setting up a swift remote. +If true avoid calling abort upload on a failure. - ## OVH Cloud Archive +It should be set to true for resuming uploads across different sessions. - To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`. +Properties: - ### Uploading Objects +- Config: leave_parts_on_error +- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR +- Type: bool +- Default: false - Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel. +--swift-chunk-size - ### Retrieving Objects +Above this size files will be chunked into a _segments container. - To retrieve objects use `rclone copy` as normal. 
If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: +Above this size files will be chunked into a _segments container. The +default for this is 5 GiB which is its maximum value. - `2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)` +Properties: - Rclone will wait for the time specified then retry the copy. +- Config: chunk_size +- Env Var: RCLONE_SWIFT_CHUNK_SIZE +- Type: SizeSuffix +- Default: 5Gi - # pCloud +--swift-no-chunk - Paths are specified as `remote:path` +Don't chunk files during streaming upload. - Paths may be as deep as required, e.g. `remote:directory/subdirectory`. +When doing streaming uploads (e.g. using rcat or mount) setting this +flag will cause the swift backend to not upload chunked files. - ## Configuration +This will limit the maximum upload size to 5 GiB. However non chunked +files are easier to deal with and have an MD5SUM. - The initial setup for pCloud involves getting a token from pCloud which you - need to do in your browser. `rclone config` walks you through it. +Rclone will still chunk files bigger than chunk_size when doing normal +copy operations. - Here is an example of how to make a remote called `remote`. First run: +Properties: - rclone config +- Config: no_chunk +- Env Var: RCLONE_SWIFT_NO_CHUNK +- Type: bool +- Default: false - This will guide you through an interactive setup process: +--swift-no-large-objects -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> remote Type of storage to -configure. Choose a number from below, or type in your own value [snip] -XX / Pcloud  "pcloud" [snip] Storage> pcloud Pcloud App Client Id - -leave blank normally. client_id> Pcloud App Client Secret - leave blank -normally. client_secret> Remote config Use web browser to automatically -authenticate rclone with remote? * Say Y if the machine running rclone -has a web browser you can use * Say N if running rclone on a (remote) -machine without web browser access If not sure try Y. If Y failed, try -N. y) Yes n) No y/n> y If your browser doesn't open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... Got code -------------------- -[remote] client_id = client_secret = token = -{"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +Disable support for static and dynamic large objects +Swift cannot transparently store files bigger than 5 GiB. There are two +schemes for doing that, static or dynamic large objects, and the API +does not allow rclone to determine whether a file is a static or dynamic +large object without doing a HEAD on the object. Since these need to be +treated differently, this means rclone has to issue HEAD requests for +objects for example when reading checksums. - See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a - machine with no Internet browser available. +When no_large_objects is set, rclone will assume that there are no +static or dynamic large objects stored. This means it can stop doing the +extra HEAD calls which in turn increases performance greatly especially +when doing a swift to swift transfer with --checksum set. 
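+
+For instance, an illustrative invocation (only safe when you are
+certain neither remote holds static or dynamic large objects - see the
+caveats that follow):
+
+    rclone copy swift-src:container swift-dst:container \
+        --checksum --swift-no-large-objects
+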
- Note that rclone runs a webserver on your local machine to collect the - token as returned from pCloud. This only runs from the moment it opens - your browser to the moment you get back the verification code. This - is on `http://127.0.0.1:53682/` and this it may require you to unblock - it temporarily if you are running a host firewall. +Setting this option implies no_chunk and also that no files will be +uploaded in chunks, so files bigger than 5 GiB will just fail on upload. - Once configured you can then use `rclone` like this, +If you set this option and there are static or dynamic large objects, +then this will give incorrect hashes for them. Downloads will succeed, +but other operations such as Remove and Copy will fail. - List directories in top level of your pCloud +Properties: - rclone lsd remote: +- Config: no_large_objects +- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS +- Type: bool +- Default: false - List all the files in your pCloud +--swift-encoding - rclone ls remote: +The encoding for the backend. - To copy a local directory to a pCloud directory called backup +See the encoding section in the overview for more info. - rclone copy /home/source remote:backup +Properties: - ### Modification times and hashes +- Config: encoding +- Env Var: RCLONE_SWIFT_ENCODING +- Type: Encoding +- Default: Slash,InvalidUtf8 - pCloud allows modification times to be set on objects accurate to 1 - second. These will be used to detect whether objects need syncing or - not. In order to set a Modification time pCloud requires the object - be re-uploaded. +--swift-description - pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 - hashes in the EU region, so you can use the `--checksum` flag. +Description of the remote - ### Restricted filename characters +Properties: - In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) - the following characters are also replaced: +- Config: description +- Env Var: RCLONE_SWIFT_DESCRIPTION +- Type: string +- Required: false - | Character | Value | Replacement | - | --------- |:-----:|:-----------:| - | \ | 0x5C | \ | +Limitations - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), - as they can't be used in JSON strings. +The Swift API doesn't return a correct MD5SUM for segmented files +(Dynamic or Static Large Objects) so rclone won't check or use the +MD5SUM for these. - ### Deleting files +Troubleshooting - Deleted files will be moved to the trash. Your subscription level - will determine how long items stay in the trash. `rclone cleanup` can - be used to empty the trash. +Rclone gives Failed to create file system for "remote:": Bad Request - ### Emptying the trash +Due to an oddity of the underlying swift library, it gives a "Bad +Request" error rather than a more sensible error when the authentication +fails for Swift. - Due to an API limitation, the `rclone cleanup` command will only work if you - set your username and password in the advanced options for this backend. - Since we generally want to avoid storing user passwords in the rclone config - file, we advise you to only set this up if you need the `rclone cleanup` command to work. +So this most likely means your username / password is wrong. You can +investigate further with the --dump-bodies flag. - ### Root folder ID +This may also be caused by specifying the region when you shouldn't have +(e.g. OVH). - You can set the `root_folder_id` for rclone. 
This is the directory
- (identified by its `Folder ID`) that rclone considers to be the root
- of your pCloud drive.
+Rclone gives Failed to create file system: Response didn't have storage url and auth token

- Normally you will leave this blank and rclone will determine the
- correct root to use itself.
+This is most likely caused by forgetting to specify your tenant when
+setting up a swift remote.

- However you can set this to restrict rclone to a specific folder
- hierarchy.
+OVH Cloud Archive

- In order to do this you will have to find the `Folder ID` of the
- directory you wish rclone to display. This will be the `folder` field
- of the URL when you open the relevant folder in the pCloud web
- interface.
+To use rclone with OVH cloud archive, first use rclone config to set up
+a swift backend with OVH, choosing pca as the storage_policy.

- So if the folder you want rclone to use has a URL which looks like
- `https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid`
- in the browser, then you use `5xxxxxxxx8` as
- the `root_folder_id` in the config.
+Uploading Objects

+Uploading objects to OVH cloud archive is no different from normal
+object storage: simply run the command you like (move, copy or sync)
+to upload the objects. Once uploaded, the objects will show in a
+"Frozen" state within the OVH control panel.

- ### Standard options
+Retrieving Objects

- Here are the Standard options specific to pcloud (Pcloud).
+To retrieve objects use rclone copy as normal. If the objects are in a
+frozen state then rclone will ask for them all to be unfrozen and it
+will wait at the end of the output with a message like the following:

- #### --pcloud-client-id
+2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

- OAuth Client Id.
+Rclone will wait for the time specified then retry the copy.

- Leave blank normally.
+pCloud

- Properties:
+Paths are specified as remote:path

- - Config: client_id
- - Env Var: RCLONE_PCLOUD_CLIENT_ID
- - Type: string
- - Required: false
+Paths may be as deep as required, e.g. remote:directory/subdirectory.

- #### --pcloud-client-secret
+Configuration

- OAuth Client Secret.
+The initial setup for pCloud involves getting a token from pCloud which
+you need to do in your browser. rclone config walks you through it.

- Leave blank normally.
+Here is an example of how to make a remote called remote. First run:

- Properties:
+ rclone config

- - Config: client_secret
- - Env Var: RCLONE_PCLOUD_CLIENT_SECRET
- - Type: string
- - Required: false
+This will guide you through an interactive setup process:

- ### Advanced options
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / Pcloud
+ \ "pcloud"
+ [snip]
+ Storage> pcloud
+ Pcloud App Client Id - leave blank normally.
+ client_id>
+ Pcloud App Client Secret - leave blank normally.
+ client_secret>
+ Remote config
+ Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+ If not sure try Y. If Y failed, try N.
+ y) Yes
+ n) No
+ y/n> y
+ If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+ Log in and authorize rclone for access
+ Waiting for code... 
+ Got code
+ --------------------
+ [remote]
+ client_id =
+ client_secret =
+ token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y

- Here are the Advanced options specific to pcloud (Pcloud).
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.

- #### --pcloud-token
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from pCloud. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This is
+on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.

- OAuth Access Token as a JSON blob.
+Once configured you can then use rclone like this,

- Properties:
+List directories in top level of your pCloud

- - Config: token
- - Env Var: RCLONE_PCLOUD_TOKEN
- - Type: string
- - Required: false
+ rclone lsd remote:

- #### --pcloud-auth-url
+List all the files in your pCloud

- Auth server URL.
+ rclone ls remote:

- Leave blank to use the provider defaults.
+To copy a local directory to a pCloud directory called backup

- Properties:
+ rclone copy /home/source remote:backup

- - Config: auth_url
- - Env Var: RCLONE_PCLOUD_AUTH_URL
- - Type: string
- - Required: false
+Modification times and hashes

- #### --pcloud-token-url
+pCloud allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not. In order to set a Modification time pCloud requires the object be
+re-uploaded.

- Token server url.
+pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and
+SHA256 hashes in the EU region, so you can use the --checksum flag.

- Leave blank to use the provider defaults.
+Restricted filename characters

- Properties:
+In addition to the default restricted characters set the following
+characters are also replaced:

- - Config: token_url
- - Env Var: RCLONE_PCLOUD_TOKEN_URL
- - Type: string
- - Required: false
+ Character Value Replacement
+ ----------- ------- -------------
+ \ 0x5C \

- #### --pcloud-encoding
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
+strings.

- The encoding for the backend.
+Deleting files

- See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+Deleted files will be moved to the trash. Your subscription level will
+determine how long items stay in the trash. rclone cleanup can be used
+to empty the trash.

- Properties:
+Emptying the trash

- - Config: encoding
- - Env Var: RCLONE_PCLOUD_ENCODING
- - Type: Encoding
- - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+Due to an API limitation, the rclone cleanup command will only work if
+you set your username and password in the advanced options for this
+backend. Since we generally want to avoid storing user passwords in the
+rclone config file, we advise you to only set this up if you need the
+rclone cleanup command to work.

- #### --pcloud-root-folder-id
+Root folder ID

- Fill in for rclone to use a non root folder as its starting point.
+You can set the root_folder_id for rclone. This is the directory
+(identified by its Folder ID) that rclone considers to be the root of
+your pCloud drive.

- Properties:
+Normally you will leave this blank and rclone will determine the correct
+root to use itself.
- - Config: root_folder_id - - Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID - - Type: string - - Default: "d0" +However you can set this to restrict rclone to a specific folder +hierarchy. - #### --pcloud-hostname +In order to do this you will have to find the Folder ID of the directory +you wish rclone to display. This will be the folder field of the URL +when you open the relevant folder in the pCloud web interface. - Hostname to connect to. +So if the folder you want rclone to use has a URL which looks like +https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid +in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the +config. - This is normally set when rclone initially does the oauth connection, - however you will need to set it by hand if you are using remote config - with rclone authorize. +Standard options +Here are the Standard options specific to pcloud (Pcloud). - Properties: +--pcloud-client-id - - Config: hostname - - Env Var: RCLONE_PCLOUD_HOSTNAME - - Type: string - - Default: "api.pcloud.com" - - Examples: - - "api.pcloud.com" - - Original/US region - - "eapi.pcloud.com" - - EU region +OAuth Client Id. - #### --pcloud-username +Leave blank normally. - Your pcloud username. - - This is only required when you want to use the cleanup command. Due to a bug - in the pcloud API the required API does not support OAuth authentication so - we have to rely on user password authentication for it. +Properties: - Properties: +- Config: client_id +- Env Var: RCLONE_PCLOUD_CLIENT_ID +- Type: string +- Required: false - - Config: username - - Env Var: RCLONE_PCLOUD_USERNAME - - Type: string - - Required: false +--pcloud-client-secret - #### --pcloud-password +OAuth Client Secret. - Your pcloud password. +Leave blank normally. - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). +Properties: - Properties: +- Config: client_secret +- Env Var: RCLONE_PCLOUD_CLIENT_SECRET +- Type: string +- Required: false - - Config: password - - Env Var: RCLONE_PCLOUD_PASSWORD - - Type: string - - Required: false +Advanced options +Here are the Advanced options specific to pcloud (Pcloud). +--pcloud-token - # PikPak +OAuth Access Token as a JSON blob. - PikPak is [a private cloud drive](https://mypikpak.com/). +Properties: - Paths are specified as `remote:path`, and may be as deep as required, e.g. `remote:directory/subdirectory`. +- Config: token +- Env Var: RCLONE_PCLOUD_TOKEN +- Type: string +- Required: false - ## Configuration +--pcloud-auth-url - Here is an example of making a remote for PikPak. +Auth server URL. - First run: +Leave blank to use the provider defaults. - rclone config +Properties: - This will guide you through an interactive setup process: +- Config: auth_url +- Env Var: RCLONE_PCLOUD_AUTH_URL +- Type: string +- Required: false -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n +--pcloud-token-url -Enter name for new remote. name> remote +Token server url. -Option Storage. Type of storage to configure. Choose a number from -below, or type in your own value. XX / PikPak  (pikpak) Storage> XX +Leave blank to use the provider defaults. -Option user. Pikpak username. Enter a value. user> USERNAME +Properties: -Option pass. Pikpak password. Choose an alternative below. 
y) Yes, type -in my own password g) Generate random password y/g> y Enter the -password: password: Confirm the password: password: +- Config: token_url +- Env Var: RCLONE_PCLOUD_TOKEN_URL +- Type: string +- Required: false -Edit advanced config? y) Yes n) No (default) y/n> +--pcloud-encoding -Configuration complete. Options: - type: pikpak - user: USERNAME - pass: -*** ENCRYPTED *** - token: -{"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"} -Keep this "remote" remote? y) Yes this is OK (default) e) Edit this -remote d) Delete this remote y/e/d> y +The encoding for the backend. +See the encoding section in the overview for more info. - ### Modification times and hashes +Properties: - PikPak keeps modification times on objects, and updates them when uploading objects, - but it does not support changing only the modification time +- Config: encoding +- Env Var: RCLONE_PCLOUD_ENCODING +- Type: Encoding +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - The MD5 hash algorithm is supported. +--pcloud-root-folder-id +Fill in for rclone to use a non root folder as its starting point. - ### Standard options +Properties: - Here are the Standard options specific to pikpak (PikPak). +- Config: root_folder_id +- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID +- Type: string +- Default: "d0" - #### --pikpak-user +--pcloud-hostname +Hostname to connect to. + +This is normally set when rclone initially does the oauth connection, +however you will need to set it by hand if you are using remote config +with rclone authorize. + +Properties: + +- Config: hostname +- Env Var: RCLONE_PCLOUD_HOSTNAME +- Type: string +- Default: "api.pcloud.com" +- Examples: + - "api.pcloud.com" + - Original/US region + - "eapi.pcloud.com" + - EU region + +--pcloud-username + +Your pcloud username. + +This is only required when you want to use the cleanup command. Due to a +bug in the pcloud API the required API does not support OAuth +authentication so we have to rely on user password authentication for +it. + +Properties: + +- Config: username +- Env Var: RCLONE_PCLOUD_USERNAME +- Type: string +- Required: false + +--pcloud-password + +Your pcloud password. + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: password +- Env Var: RCLONE_PCLOUD_PASSWORD +- Type: string +- Required: false + +--pcloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PCLOUD_DESCRIPTION +- Type: string +- Required: false + +PikPak + +PikPak is a private cloud drive. + +Paths are specified as remote:path, and may be as deep as required, e.g. +remote:directory/subdirectory. + +Configuration + +Here is an example of making a remote for PikPak. + +First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> remote + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + XX / PikPak + \ (pikpak) + Storage> XX + + Option user. Pikpak username. + Enter a value. + user> USERNAME - Properties: - - - Config: user - - Env Var: RCLONE_PIKPAK_USER - - Type: string - - Required: true - - #### --pikpak-pass - + Option pass. Pikpak password. + Choose an alternative below. 
+ y) Yes, type in my own password + g) Generate random password + y/g> y + Enter the password: + password: + Confirm the password: + password: - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + Edit advanced config? + y) Yes + n) No (default) + y/n> - Properties: + Configuration complete. + Options: + - type: pikpak + - user: USERNAME + - pass: *** ENCRYPTED *** + - token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"} + Keep this "remote" remote? + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y - - Config: pass - - Env Var: RCLONE_PIKPAK_PASS - - Type: string - - Required: true +Modification times and hashes - ### Advanced options +PikPak keeps modification times on objects, and updates them when +uploading objects, but it does not support changing only the +modification time - Here are the Advanced options specific to pikpak (PikPak). +The MD5 hash algorithm is supported. - #### --pikpak-client-id +Standard options - OAuth Client Id. +Here are the Standard options specific to pikpak (PikPak). - Leave blank normally. +--pikpak-user - Properties: +Pikpak username. - - Config: client_id - - Env Var: RCLONE_PIKPAK_CLIENT_ID - - Type: string - - Required: false +Properties: - #### --pikpak-client-secret +- Config: user +- Env Var: RCLONE_PIKPAK_USER +- Type: string +- Required: true - OAuth Client Secret. +--pikpak-pass - Leave blank normally. +Pikpak password. - Properties: +NB Input to this must be obscured - see rclone obscure. - - Config: client_secret - - Env Var: RCLONE_PIKPAK_CLIENT_SECRET - - Type: string - - Required: false +Properties: - #### --pikpak-token +- Config: pass +- Env Var: RCLONE_PIKPAK_PASS +- Type: string +- Required: true - OAuth Access Token as a JSON blob. +Advanced options - Properties: +Here are the Advanced options specific to pikpak (PikPak). - - Config: token - - Env Var: RCLONE_PIKPAK_TOKEN - - Type: string - - Required: false +--pikpak-client-id - #### --pikpak-auth-url +OAuth Client Id. - Auth server URL. +Leave blank normally. - Leave blank to use the provider defaults. +Properties: - Properties: +- Config: client_id +- Env Var: RCLONE_PIKPAK_CLIENT_ID +- Type: string +- Required: false - - Config: auth_url - - Env Var: RCLONE_PIKPAK_AUTH_URL - - Type: string - - Required: false +--pikpak-client-secret - #### --pikpak-token-url +OAuth Client Secret. - Token server url. +Leave blank normally. - Leave blank to use the provider defaults. +Properties: - Properties: +- Config: client_secret +- Env Var: RCLONE_PIKPAK_CLIENT_SECRET +- Type: string +- Required: false - - Config: token_url - - Env Var: RCLONE_PIKPAK_TOKEN_URL - - Type: string - - Required: false +--pikpak-token - #### --pikpak-root-folder-id +OAuth Access Token as a JSON blob. - ID of the root folder. - Leave blank normally. +Properties: - Fill in for rclone to use a non root folder as its starting point. +- Config: token +- Env Var: RCLONE_PIKPAK_TOKEN +- Type: string +- Required: false +--pikpak-auth-url - Properties: +Auth server URL. - - Config: root_folder_id - - Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID - - Type: string - - Required: false +Leave blank to use the provider defaults. - #### --pikpak-use-trash +Properties: - Send files to the trash instead of deleting permanently. +- Config: auth_url +- Env Var: RCLONE_PIKPAK_AUTH_URL +- Type: string +- Required: false - Defaults to true, namely sending files to the trash. 
- Use `--pikpak-use-trash=false` to delete files permanently instead.
+--pikpak-token-url

- Properties:
+Token server url.

- - Config: use_trash
- - Env Var: RCLONE_PIKPAK_USE_TRASH
- - Type: bool
- - Default: true
+Leave blank to use the provider defaults.

- #### --pikpak-trashed-only
+Properties:

- Only show files that are in the trash.
+- Config: token_url
+- Env Var: RCLONE_PIKPAK_TOKEN_URL
+- Type: string
+- Required: false

- This will show trashed files in their original directory structure.
+--pikpak-root-folder-id

- Properties:
+ID of the root folder. Leave blank normally.

- - Config: trashed_only
- - Env Var: RCLONE_PIKPAK_TRASHED_ONLY
- - Type: bool
- - Default: false
+Fill in for rclone to use a non root folder as its starting point.

- #### --pikpak-hash-memory-limit
+Properties:

- Files bigger than this will be cached on disk to calculate hash if required.
+- Config: root_folder_id
+- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID
+- Type: string
+- Required: false

- Properties:
+--pikpak-use-trash

- - Config: hash_memory_limit
- - Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
- - Type: SizeSuffix
- - Default: 10Mi
+Send files to the trash instead of deleting permanently.

- #### --pikpak-encoding
+Defaults to true, namely sending files to the trash. Use
+--pikpak-use-trash=false to delete files permanently instead.

- The encoding for the backend.
+Properties:

- See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+- Config: use_trash
+- Env Var: RCLONE_PIKPAK_USE_TRASH
+- Type: bool
+- Default: true

- Properties:
+--pikpak-trashed-only

- - Config: encoding
- - Env Var: RCLONE_PIKPAK_ENCODING
- - Type: Encoding
- - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
+Only show files that are in the trash.

- ## Backend commands
+This will show trashed files in their original directory structure.

- Here are the commands specific to the pikpak backend.
+Properties:

- Run them with
+- Config: trashed_only
+- Env Var: RCLONE_PIKPAK_TRASHED_ONLY
+- Type: bool
+- Default: false

- rclone backend COMMAND remote:
+--pikpak-hash-memory-limit

- The help below will explain what arguments each command takes.
+Files bigger than this will be cached on disk to calculate hash if
+required.

- See the [backend](https://rclone.org/commands/rclone_backend/) command for more
- info on how to pass options and arguments.
+Properties:

- These can be run on a running backend using the rc command
- [backend/command](https://rclone.org/rc/#backend-command).
+- Config: hash_memory_limit
+- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
+- Type: SizeSuffix
+- Default: 10Mi

- ### addurl
+--pikpak-encoding

- Add offline download task for url
+The encoding for the backend.

- rclone backend addurl remote: [options] [<key>=<value>]+
+See the encoding section in the overview for more info.

- This command adds offline download task for url.
+Properties:

- Usage:
+- Config: encoding
+- Env Var: RCLONE_PIKPAK_ENCODING
+- Type: Encoding
+- Default:
+    Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot

- rclone backend addurl pikpak:dirpath url
+--pikpak-description

- Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
- download will fallback to default 'My Pack' folder.
+Description of the remote

+Properties:

- ### decompress
+- Config: description
+- Env Var: RCLONE_PIKPAK_DESCRIPTION
+- Type: string
+- Required: false

- Request decompress of a file/files in a folder
+Backend commands

- rclone backend decompress remote: [options] [<key>=<value>]+
+Here are the commands specific to the pikpak backend.

- This command requests decompress of file/files in a folder.
+Run them with

- Usage:
+ rclone backend COMMAND remote:

- rclone backend decompress pikpak:dirpath {filename} -o password=password
- rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+The help below will explain what arguments each command takes.

- An optional argument 'filename' can be specified for a file located in
- 'pikpak:dirpath'. You may want to pass '-o password=password' for a
- password-protected files. Also, pass '-o delete-src-file' to delete
- source files after decompression finished.
+See the backend command for more info on how to pass options and
+arguments.

- Result:
+These can be run on a running backend using the rc command
+backend/command.

- {
- "Decompressed": 17,
- "SourceDeleted": 0,
- "Errors": 0
- }
+addurl

+Add offline download task for url

+ rclone backend addurl remote: [options] [<key>=<value>]+

+This command adds offline download task for url.

- ## Limitations
+Usage:

- ### Hashes may be empty
+ rclone backend addurl pikpak:dirpath url

- PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.
+Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, download
+will fallback to default 'My Pack' folder.

- ### Deleted files still visible with trashed-only
+decompress

- Deleted files will still be visible with `--pikpak-trashed-only` even after the
- trash emptied. This goes away after few days.
+Request decompress of a file/files in a folder

- # premiumize.me
+ rclone backend decompress remote: [options] [<key>=<value>]+

- Paths are specified as `remote:path`
+This command requests decompress of file/files in a folder.

- Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+Usage:

- ## Configuration
+ rclone backend decompress pikpak:dirpath {filename} -o password=password
+ rclone backend decompress pikpak:dirpath {filename} -o delete-src-file

- The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
- need to do in your browser. `rclone config` walks you through it.
+An optional argument 'filename' can be specified for a file located in
+'pikpak:dirpath'. You may want to pass '-o password=password' for
+password-protected files. Also, pass '-o delete-src-file' to delete
+source files after decompression has finished.

- Here is an example of how to make a remote called `remote`. First run:
+Result:

- rclone config
+ {
+ "Decompressed": 17,
+ "SourceDeleted": 0,
+ "Errors": 0
+ }

- This will guide you through an interactive setup process:
+Limitations

-No remotes found, make a new one? n) New remote s) Set configuration
-password q) Quit config n/s/q> n name> remote Type of storage to
-configure. Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value [snip] XX /
-premiumize.me  "premiumizeme" [snip] Storage> premiumizeme ** See help
-for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+Hashes may be empty

-Remote config Use web browser to automatically authenticate rclone with
-remote?
* Say Y if the machine running rclone has a web browser you can
-use * Say N if running rclone on a (remote) machine without web browser
-access If not sure try Y. If Y failed, try N. y) Yes n) No y/n> y If
-your browser doesn't open automatically go to the following link:
-http://127.0.0.1:53682/auth Log in and authorize rclone for access
-Waiting for code... Got code -------------------- [remote] type =
-premiumizeme token =
-{"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
--------------------- y) Yes this is OK e) Edit this remote d) Delete
-this remote y/e/d>
+PikPak supports the MD5 hash, but it is sometimes empty, especially
+for user-uploaded files.

+Deleted files still visible with trashed-only

- See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
- machine with no Internet browser available.
+Deleted files will still be visible with --pikpak-trashed-only even
+after the trash is emptied. This goes away after a few days.

- Note that rclone runs a webserver on your local machine to collect the
- token as returned from premiumize.me. This only runs from the moment it opens
- your browser to the moment you get back the verification code. This
- is on `http://127.0.0.1:53682/` and this it may require you to unblock
- it temporarily if you are running a host firewall.
+premiumize.me

- Once configured you can then use `rclone` like this,
+Paths are specified as remote:path

- List directories in top level of your premiumize.me
+Paths may be as deep as required, e.g. remote:directory/subdirectory.

- rclone lsd remote:
+Configuration

- List all the files in your premiumize.me
+The initial setup for premiumize.me involves getting a token from
+premiumize.me which you need to do in your browser. rclone config walks
+you through it.

- rclone ls remote:
+Here is an example of how to make a remote called remote. First run:

- To copy a local directory to an premiumize.me directory called backup
+ rclone config

- rclone copy /home/source remote:backup
+This will guide you through an interactive setup process:

- ### Modification times and hashes
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / premiumize.me
+ \ "premiumizeme"
+ [snip]
+ Storage> premiumizeme
+ ** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **

- premiumize.me does not support modification times or hashes, therefore
- syncing will default to `--size-only` checking. Note that using
- `--update` will work.
+ Remote config
+ Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+ If not sure try Y. If Y failed, try N.
+ y) Yes
+ n) No
+ y/n> y
+ If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+ Log in and authorize rclone for access
+ Waiting for code... 
+ Got code
+ --------------------
+ [remote]
+ type = premiumizeme
+ token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d>

- ### Restricted filename characters
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.

- In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
- the following characters are also replaced:
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from premiumize.me. This only runs from the moment it
+opens your browser to the moment you get back the verification code.
+This is on http://127.0.0.1:53682/ and it may require you to
+unblock it temporarily if you are running a host firewall.

- | Character | Value | Replacement |
- | --------- |:-----:|:-----------:|
- | \ | 0x5C | \ |
- | " | 0x22 | " |
+Once configured you can then use rclone like this,

- Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
- as they can't be used in JSON strings.
+List directories in top level of your premiumize.me

+ rclone lsd remote:

- ### Standard options
+List all the files in your premiumize.me

- Here are the Standard options specific to premiumizeme (premiumize.me).
+ rclone ls remote:

- #### --premiumizeme-client-id
+To copy a local directory to a premiumize.me directory called backup

- OAuth Client Id.
+ rclone copy /home/source remote:backup

- Leave blank normally.
+Modification times and hashes

- Properties:
+premiumize.me does not support modification times or hashes, therefore
+syncing will default to --size-only checking. Note that using --update
+will work.

- - Config: client_id
- - Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID
- - Type: string
- - Required: false
+Restricted filename characters

- #### --premiumizeme-client-secret
+In addition to the default restricted characters set the following
+characters are also replaced:

- OAuth Client Secret.
+ Character Value Replacement
+ ----------- ------- -------------
+ \ 0x5C \
+ " 0x22 "

- Leave blank normally.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
+strings.

- Properties:
+Standard options

- - Config: client_secret
- - Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET
- - Type: string
- - Required: false
+Here are the Standard options specific to premiumizeme (premiumize.me).

- #### --premiumizeme-api-key
+--premiumizeme-client-id

- API Key.
+OAuth Client Id.

- This is not normally used - use oauth instead.
+Leave blank normally.

+Properties:

- Properties:
+- Config: client_id
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID
+- Type: string
+- Required: false

- - Config: api_key
- - Env Var: RCLONE_PREMIUMIZEME_API_KEY
- - Type: string
- - Required: false
+--premiumizeme-client-secret

- ### Advanced options
+OAuth Client Secret.

- Here are the Advanced options specific to premiumizeme (premiumize.me).
+Leave blank normally.

- #### --premiumizeme-token
+Properties:

- OAuth Access Token as a JSON blob.
+- Config: client_secret
+- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET
+- Type: string
+- Required: false

- Properties:
+--premiumizeme-api-key

- - Config: token
- - Env Var: RCLONE_PREMIUMIZEME_TOKEN
- - Type: string
- - Required: false
+API Key.

- #### --premiumizeme-auth-url
+This is not normally used - use oauth instead.

- Auth server URL.
+Properties: - Leave blank to use the provider defaults. +- Config: api_key +- Env Var: RCLONE_PREMIUMIZEME_API_KEY +- Type: string +- Required: false - Properties: +Advanced options - - Config: auth_url - - Env Var: RCLONE_PREMIUMIZEME_AUTH_URL - - Type: string - - Required: false +Here are the Advanced options specific to premiumizeme (premiumize.me). - #### --premiumizeme-token-url +--premiumizeme-token - Token server url. +OAuth Access Token as a JSON blob. - Leave blank to use the provider defaults. +Properties: - Properties: +- Config: token +- Env Var: RCLONE_PREMIUMIZEME_TOKEN +- Type: string +- Required: false - - Config: token_url - - Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL - - Type: string - - Required: false +--premiumizeme-auth-url - #### --premiumizeme-encoding +Auth server URL. - The encoding for the backend. +Leave blank to use the provider defaults. - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +Properties: - Properties: +- Config: auth_url +- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +- Type: string +- Required: false - - Config: encoding - - Env Var: RCLONE_PREMIUMIZEME_ENCODING - - Type: Encoding - - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot +--premiumizeme-token-url +Token server url. +Leave blank to use the provider defaults. - ## Limitations +Properties: - Note that premiumize.me is case insensitive so you can't have a file called - "Hello.doc" and one called "hello.doc". +- Config: token_url +- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +- Type: string +- Required: false - premiumize.me file names can't have the `\` or `"` characters in. - rclone maps these to and from an identical looking unicode equivalents - `\` and `"` +--premiumizeme-encoding - premiumize.me only supports filenames up to 255 characters in length. +The encoding for the backend. - # Proton Drive +See the encoding section in the overview for more info. - [Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. +Properties: - This is an rclone backend for Proton Drive which supports the file transfer - features of Proton Drive using the same client-side encryption. +- Config: encoding +- Env Var: RCLONE_PREMIUMIZEME_ENCODING +- Type: Encoding +- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot - Due to the fact that Proton Drive doesn't publish its API documentation, this - backend is implemented with best efforts by reading the open-sourced client - source code and observing the Proton Drive traffic in the browser. +--premiumizeme-description - **NB** This backend is currently in Beta. It is believed to be correct - and all the integration tests pass. However the Proton Drive protocol - has evolved over time there may be accounts it is not compatible - with. Please [post on the rclone forum](https://forum.rclone.org/) if - you find an incompatibility. +Description of the remote - Paths are specified as `remote:path` +Properties: - Paths may be as deep as required, e.g. `remote:directory/subdirectory`. +- Config: description +- Env Var: RCLONE_PREMIUMIZEME_DESCRIPTION +- Type: string +- Required: false - ## Configurations +Limitations - Here is an example of how to make a remote called `remote`. First run: +Note that premiumize.me is case insensitive so you can't have a file +called "Hello.doc" and one called "hello.doc". - rclone config +premiumize.me file names can't have the \ or " characters in. 
rclone
+maps these to and from identical looking unicode equivalents \ and
+"

- This will guide you through an interactive setup process:
+premiumize.me only supports filenames up to 255 characters in length.

-No remotes found, make a new one? n) New remote s) Set configuration
-password q) Quit config n/s/q> n name> remote Type of storage to
-configure. Choose a number from below, or type in your own value [snip]
-XX / Proton Drive  "Proton Drive" [snip] Storage> protondrive User name
-user> you@protonmail.com Password. y) Yes type in my own password g)
-Generate random password n) No leave this optional password blank y/g/n>
-y Enter the password: password: Confirm the password: password: Option
-2fa. 2FA code (if the account requires one) Enter a value. Press Enter
-to leave empty. 2fa> 123456 Remote config -------------------- [remote]
-type = protondrive user = you@protonmail.com pass = *** ENCRYPTED ***
--------------------- y) Yes this is OK e) Edit this remote d) Delete
-this remote y/e/d> y
+Proton Drive

+Proton Drive is an end-to-end encrypted Swiss vault for your files that
+protects your data.

- **NOTE:** The Proton Drive encryption keys need to have been already generated
- after a regular login via the browser, otherwise attempting to use the
- credentials in `rclone` will fail.
+This is an rclone backend for Proton Drive which supports the file
+transfer features of Proton Drive using the same client-side encryption.

- Once configured you can then use `rclone` like this,
+Because Proton Drive doesn't publish its API documentation, this
+backend is implemented with best efforts by reading the open-sourced
+client source code and observing the Proton Drive traffic in the
+browser.

- List directories in top level of your Proton Drive
+NB This backend is currently in Beta. It is believed to be correct and
+all the integration tests pass. However the Proton Drive protocol has
+evolved over time, so there may be accounts it is not compatible with.
+Please post on the rclone forum if you find an incompatibility.

- rclone lsd remote:
+Paths are specified as remote:path

- List all the files in your Proton Drive
+Paths may be as deep as required, e.g. remote:directory/subdirectory.

- rclone ls remote:
+Configurations

- To copy a local directory to an Proton Drive directory called backup
+Here is an example of how to make a remote called remote. First run:

- rclone copy /home/source remote:backup
+ rclone config

- ### Modification times and hashes
- 
- Proton Drive Bridge does not support updating modification times yet.
- 
- The SHA1 hash algorithm is supported.
- 
- ### Restricted filename characters
- 
- Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
- right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
- 
- ### Duplicated files
- 
- Proton Drive can not have two files with exactly the same name and path. If the
- conflict occurs, depending on the advanced config, the file might or might not
- be overwritten.
- 
- ### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
- 
- Please set your mailbox password in the advanced config section.
- 
- ### Caching
- 
- The cache is currently built for the case when the rclone is the only instance
- performing operations to the mount point. 
The event system, which is the proton - API system that provides visibility of what has changed on the drive, is yet - to be implemented, so updates from other clients won’t be reflected in the - cache. Thus, if there are concurrent clients accessing the same mount point, - then we might have a problem with caching the stale data. - - - ### Standard options - - Here are the Standard options specific to protondrive (Proton Drive). - - #### --protondrive-username - - The username of your proton account - - Properties: - - - Config: username - - Env Var: RCLONE_PROTONDRIVE_USERNAME - - Type: string - - Required: true - - #### --protondrive-password - - The password of your proton account. - - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - - Properties: - - - Config: password - - Env Var: RCLONE_PROTONDRIVE_PASSWORD - - Type: string - - Required: true - - #### --protondrive-2fa - - The 2FA code - - The value can also be provided with --protondrive-2fa=000000 - - The 2FA code of your proton drive account if the account is set up with - two-factor authentication - - Properties: - - - Config: 2fa - - Env Var: RCLONE_PROTONDRIVE_2FA - - Type: string - - Required: false - - ### Advanced options - - Here are the Advanced options specific to protondrive (Proton Drive). - - #### --protondrive-mailbox-password - - The mailbox password of your two-password proton account. - - For more information regarding the mailbox password, please check the - following official knowledge base article: - https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - - Properties: - - - Config: mailbox_password - - Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD - - Type: string - - Required: false - - #### --protondrive-client-uid - - Client uid key (internal use only) - - Properties: - - - Config: client_uid - - Env Var: RCLONE_PROTONDRIVE_CLIENT_UID - - Type: string - - Required: false - - #### --protondrive-client-access-token - - Client access token key (internal use only) - - Properties: - - - Config: client_access_token - - Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN - - Type: string - - Required: false - - #### --protondrive-client-refresh-token - - Client refresh token key (internal use only) - - Properties: - - - Config: client_refresh_token - - Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN - - Type: string - - Required: false - - #### --protondrive-client-salted-key-pass - - Client salted key pass key (internal use only) - - Properties: - - - Config: client_salted_key_pass - - Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS - - Type: string - - Required: false - - #### --protondrive-encoding - - The encoding for the backend. - - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - - Properties: - - - Config: encoding - - Env Var: RCLONE_PROTONDRIVE_ENCODING - - Type: Encoding - - Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - - #### --protondrive-original-file-size - - Return the file size before encryption - - The size of the encrypted file will be different from (bigger than) the - original file size. 
Unless there is a reason to return the file size - after encryption is performed, otherwise, set this option to true, as - features like Open() which will need to be supplied with original content - size, will fail to operate properly - - Properties: - - - Config: original_file_size - - Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE - - Type: bool - - Default: true - - #### --protondrive-app-version - - The app version string - - The app version string indicates the client that is currently performing - the API request. This information is required and will be sent with every - API request. - - Properties: - - - Config: app_version - - Env Var: RCLONE_PROTONDRIVE_APP_VERSION - - Type: string - - Default: "macos-drive@1.0.0-alpha.1+rclone" - - #### --protondrive-replace-existing-draft - - Create a new revision when filename conflict is detected - - When a file upload is cancelled or failed before completion, a draft will be - created and the subsequent upload of the same file to the same location will be - reported as a conflict. - - The value can also be set by --protondrive-replace-existing-draft=true - - If the option is set to true, the draft will be replaced and then the upload - operation will restart. If there are other clients also uploading at the same - file location at the same time, the behavior is currently unknown. Need to set - to true for integration tests. - If the option is set to false, an error "a draft exist - usually this means a - file is being uploaded at another client, or, there was a failed upload attempt" - will be returned, and no upload will happen. - - Properties: - - - Config: replace_existing_draft - - Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT - - Type: bool - - Default: false - - #### --protondrive-enable-caching - - Caches the files and folders metadata to reduce API calls - - Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, - as the current implementation doesn't update or clear the cache when there are - external changes. - - The files and folders on ProtonDrive are represented as links with keyrings, - which can be cached to improve performance and be friendly to the API server. - - The cache is currently built for the case when the rclone is the only instance - performing operations to the mount point. The event system, which is the proton - API system that provides visibility of what has changed on the drive, is yet - to be implemented, so updates from other clients won’t be reflected in the - cache. Thus, if there are concurrent clients accessing the same mount point, - then we might have a problem with caching the stale data. - - Properties: - - - Config: enable_caching - - Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING - - Type: bool - - Default: true - - - - ## Limitations - - This backend uses the - [Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which - is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a - fork of the [official repo](https://github.com/ProtonMail/go-proton-api). - - There is no official API documentation available from Proton Drive. But, thanks - to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) - and the web, iOS, and Android client codebases, we don't need to completely - reverse engineer the APIs by observing the web client traffic! 
- - [proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic - building blocks of API calls and error handling, such as 429 exponential - back-off, but it is pretty much just a barebone interface to the Proton API. - For example, the encryption and decryption of the Proton Drive file are not - provided in this library. - - The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on - top of this quickly. This codebase handles the intricate tasks before and after - calling Proton APIs, particularly the complex encryption scheme, allowing - developers to implement features for other software on top of this codebase. - There are likely quite a few errors in this library, as there isn't official - documentation available. - - # put.io - - Paths are specified as `remote:path` - - put.io paths may be as deep as required, e.g. - `remote:directory/subdirectory`. - - ## Configuration - - The initial setup for put.io involves getting a token from put.io - which you need to do in your browser. `rclone config` walks you - through it. - - Here is an example of how to make a remote called `remote`. First run: - - rclone config - - This will guide you through an interactive setup process: - -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> putio Type of storage to -configure. Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value [snip] XX / Put.io - "putio" [snip] Storage> putio ** See help for putio backend at: -https://rclone.org/putio/ ** - -Remote config Use web browser to automatically authenticate rclone with -remote? * Say Y if the machine running rclone has a web browser you can -use * Say N if running rclone on a (remote) machine without web browser -access If not sure try Y. If Y failed, try N. y) Yes n) No y/n> y If -your browser doesn't open automatically go to the following link: -http://127.0.0.1:53682/auth Log in and authorize rclone for access -Waiting for code... Got code -------------------- [putio] type = putio -token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y Current remotes: - -Name Type ==== ==== putio putio - -e) Edit existing remote -f) New remote -g) Delete remote -h) Rename remote -i) Copy remote -j) Set configuration password -k) Quit config e/n/d/r/c/s/q> q - - - See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a - machine with no Internet browser available. - - Note that rclone runs a webserver on your local machine to collect the - token as returned from put.io if using web browser to automatically - authenticate. This only - runs from the moment it opens your browser to the moment you get back - the verification code. This is on `http://127.0.0.1:53682/` and this - it may require you to unblock it temporarily if you are running a host - firewall, or use manual mode. 
- - You can then use it like this, - - List directories in top level of your put.io - - rclone lsd remote: - - List all the files in your put.io - - rclone ls remote: - - To copy a local directory to a put.io directory called backup - - rclone copy /home/source remote:backup - - ### Restricted filename characters - - In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) - the following characters are also replaced: - - | Character | Value | Replacement | - | --------- |:-----:|:-----------:| - | \ | 0x5C | \ | - - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), - as they can't be used in JSON strings. - - - ### Standard options - - Here are the Standard options specific to putio (Put.io). - - #### --putio-client-id - - OAuth Client Id. - - Leave blank normally. - - Properties: - - - Config: client_id - - Env Var: RCLONE_PUTIO_CLIENT_ID - - Type: string - - Required: false - - #### --putio-client-secret - - OAuth Client Secret. - - Leave blank normally. - - Properties: - - - Config: client_secret - - Env Var: RCLONE_PUTIO_CLIENT_SECRET - - Type: string - - Required: false - - ### Advanced options - - Here are the Advanced options specific to putio (Put.io). - - #### --putio-token - - OAuth Access Token as a JSON blob. - - Properties: - - - Config: token - - Env Var: RCLONE_PUTIO_TOKEN - - Type: string - - Required: false - - #### --putio-auth-url - - Auth server URL. - - Leave blank to use the provider defaults. - - Properties: - - - Config: auth_url - - Env Var: RCLONE_PUTIO_AUTH_URL - - Type: string - - Required: false - - #### --putio-token-url - - Token server url. - - Leave blank to use the provider defaults. - - Properties: - - - Config: token_url - - Env Var: RCLONE_PUTIO_TOKEN_URL - - Type: string - - Required: false - - #### --putio-encoding - - The encoding for the backend. - - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - - Properties: - - - Config: encoding - - Env Var: RCLONE_PUTIO_ENCODING - - Type: Encoding - - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - - - - ## Limitations - - put.io has rate limiting. When you hit a limit, rclone automatically - retries after waiting the amount of time requested by the server. - - If you want to avoid ever hitting these limits, you may use the - `--tpslimit` flag with a low number. Note that the imposed limits - may be different for different operations, and may change over time. - - # Proton Drive - - [Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. - - This is an rclone backend for Proton Drive which supports the file transfer - features of Proton Drive using the same client-side encryption. - - Due to the fact that Proton Drive doesn't publish its API documentation, this - backend is implemented with best efforts by reading the open-sourced client - source code and observing the Proton Drive traffic in the browser. - - **NB** This backend is currently in Beta. It is believed to be correct - and all the integration tests pass. However the Proton Drive protocol - has evolved over time there may be accounts it is not compatible - with. Please [post on the rclone forum](https://forum.rclone.org/) if - you find an incompatibility. - - Paths are specified as `remote:path` - - Paths may be as deep as required, e.g. `remote:directory/subdirectory`. 
- - ## Configurations - - Here is an example of how to make a remote called `remote`. First run: - - rclone config - - This will guide you through an interactive setup process: - -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> remote Type of storage to -configure. Choose a number from below, or type in your own value [snip] -XX / Proton Drive  "Proton Drive" [snip] Storage> protondrive User name -user> you@protonmail.com Password. y) Yes type in my own password g) -Generate random password n) No leave this optional password blank y/g/n> -y Enter the password: password: Confirm the password: password: Option -2fa. 2FA code (if the account requires one) Enter a value. Press Enter -to leave empty. 2fa> 123456 Remote config -------------------- [remote] -type = protondrive user = you@protonmail.com pass = *** ENCRYPTED *** --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y - - - **NOTE:** The Proton Drive encryption keys need to have been already generated - after a regular login via the browser, otherwise attempting to use the - credentials in `rclone` will fail. - - Once configured you can then use `rclone` like this, - - List directories in top level of your Proton Drive - - rclone lsd remote: - - List all the files in your Proton Drive - - rclone ls remote: - - To copy a local directory to an Proton Drive directory called backup - - rclone copy /home/source remote:backup - - ### Modification times and hashes - - Proton Drive Bridge does not support updating modification times yet. - - The SHA1 hash algorithm is supported. - - ### Restricted filename characters - - Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and - right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) - - ### Duplicated files - - Proton Drive can not have two files with exactly the same name and path. If the - conflict occurs, depending on the advanced config, the file might or might not - be overwritten. - - ### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) - - Please set your mailbox password in the advanced config section. - - ### Caching - - The cache is currently built for the case when the rclone is the only instance - performing operations to the mount point. The event system, which is the proton - API system that provides visibility of what has changed on the drive, is yet - to be implemented, so updates from other clients won’t be reflected in the - cache. Thus, if there are concurrent clients accessing the same mount point, - then we might have a problem with caching the stale data. - - - ### Standard options - - Here are the Standard options specific to protondrive (Proton Drive). - - #### --protondrive-username - - The username of your proton account - - Properties: - - - Config: username - - Env Var: RCLONE_PROTONDRIVE_USERNAME - - Type: string - - Required: true - - #### --protondrive-password - - The password of your proton account. - - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
- - Properties: - - - Config: password - - Env Var: RCLONE_PROTONDRIVE_PASSWORD - - Type: string - - Required: true - - #### --protondrive-2fa - - The 2FA code - - The value can also be provided with --protondrive-2fa=000000 - - The 2FA code of your proton drive account if the account is set up with - two-factor authentication - - Properties: - - - Config: 2fa - - Env Var: RCLONE_PROTONDRIVE_2FA - - Type: string - - Required: false - - ### Advanced options - - Here are the Advanced options specific to protondrive (Proton Drive). - - #### --protondrive-mailbox-password - - The mailbox password of your two-password proton account. - - For more information regarding the mailbox password, please check the - following official knowledge base article: - https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - - Properties: - - - Config: mailbox_password - - Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD - - Type: string - - Required: false - - #### --protondrive-client-uid - - Client uid key (internal use only) - - Properties: - - - Config: client_uid - - Env Var: RCLONE_PROTONDRIVE_CLIENT_UID - - Type: string - - Required: false - - #### --protondrive-client-access-token - - Client access token key (internal use only) - - Properties: - - - Config: client_access_token - - Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN - - Type: string - - Required: false - - #### --protondrive-client-refresh-token - - Client refresh token key (internal use only) - - Properties: - - - Config: client_refresh_token - - Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN - - Type: string - - Required: false - - #### --protondrive-client-salted-key-pass - - Client salted key pass key (internal use only) - - Properties: - - - Config: client_salted_key_pass - - Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS - - Type: string - - Required: false - - #### --protondrive-encoding - - The encoding for the backend. - - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - - Properties: - - - Config: encoding - - Env Var: RCLONE_PROTONDRIVE_ENCODING - - Type: Encoding - - Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - - #### --protondrive-original-file-size - - Return the file size before encryption - - The size of the encrypted file will be different from (bigger than) the - original file size. Unless there is a reason to return the file size - after encryption is performed, otherwise, set this option to true, as - features like Open() which will need to be supplied with original content - size, will fail to operate properly - - Properties: - - - Config: original_file_size - - Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE - - Type: bool - - Default: true - - #### --protondrive-app-version - - The app version string - - The app version string indicates the client that is currently performing - the API request. This information is required and will be sent with every - API request. - - Properties: - - - Config: app_version - - Env Var: RCLONE_PROTONDRIVE_APP_VERSION - - Type: string - - Default: "macos-drive@1.0.0-alpha.1+rclone" - - #### --protondrive-replace-existing-draft - - Create a new revision when filename conflict is detected - - When a file upload is cancelled or failed before completion, a draft will be - created and the subsequent upload of the same file to the same location will be - reported as a conflict. 
- - The value can also be set by --protondrive-replace-existing-draft=true - - If the option is set to true, the draft will be replaced and then the upload - operation will restart. If there are other clients also uploading at the same - file location at the same time, the behavior is currently unknown. Need to set - to true for integration tests. - If the option is set to false, an error "a draft exist - usually this means a - file is being uploaded at another client, or, there was a failed upload attempt" - will be returned, and no upload will happen. - - Properties: - - - Config: replace_existing_draft - - Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT - - Type: bool - - Default: false - - #### --protondrive-enable-caching - - Caches the files and folders metadata to reduce API calls - - Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, - as the current implementation doesn't update or clear the cache when there are - external changes. - - The files and folders on ProtonDrive are represented as links with keyrings, - which can be cached to improve performance and be friendly to the API server. - - The cache is currently built for the case when the rclone is the only instance - performing operations to the mount point. The event system, which is the proton - API system that provides visibility of what has changed on the drive, is yet - to be implemented, so updates from other clients won’t be reflected in the - cache. Thus, if there are concurrent clients accessing the same mount point, - then we might have a problem with caching the stale data. - - Properties: - - - Config: enable_caching - - Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING - - Type: bool - - Default: true - - - - ## Limitations - - This backend uses the - [Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which - is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a - fork of the [official repo](https://github.com/ProtonMail/go-proton-api). - - There is no official API documentation available from Proton Drive. But, thanks - to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) - and the web, iOS, and Android client codebases, we don't need to completely - reverse engineer the APIs by observing the web client traffic! - - [proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic - building blocks of API calls and error handling, such as 429 exponential - back-off, but it is pretty much just a barebone interface to the Proton API. - For example, the encryption and decryption of the Proton Drive file are not - provided in this library. - - The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on - top of this quickly. This codebase handles the intricate tasks before and after - calling Proton APIs, particularly the complex encryption scheme, allowing - developers to implement features for other software on top of this codebase. - There are likely quite a few errors in this library, as there isn't official - documentation available. - - # Seafile - - This is a backend for the [Seafile](https://www.seafile.com/) storage service: - - It works with both the free community edition or the professional edition. - - Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. - - Encrypted libraries are also supported. 
- - It supports 2FA enabled users
- - Using a Library API Token is **not** supported
-
- ## Configuration
-
- There are two distinct modes you can setup your remote:
- - you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
-   Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
- - you point your remote to a specific library during the configuration:
-   Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
-
- ### Configuration in root mode
-
- Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run
-
-    rclone config
-
- This will guide you through an interactive setup process. To authenticate
- you will need the URL of your server, your email (or username) and your password.
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> seafile
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Seafile
-   \ "seafile"
-[snip]
-Storage> seafile
-** See help for seafile backend at: https://rclone.org/seafile/ **
-
-URL of seafile host to connect to
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
- 1 / Connect to cloud.seafile.com
-   \ "https://cloud.seafile.com/"
-url> http://my.seafile.server/
-User name (usually email address)
-Enter a string value. Press Enter for the default ("").
-user> me@example.com
-Password
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Two-factor authentication ('true' if the account has 2FA enabled)
-Enter a boolean value (true or false). Press Enter for the default ("false").
-2fa> false
-Name of the library. Leave blank to access all non-encrypted libraries.
-Enter a string value. Press Enter for the default ("").
-library>
-Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> n
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
-Two-factor authentication is not enabled on this account.
---------------------
-[seafile]
-type = seafile
-url = http://my.seafile.server/
-user = me@example.com
-pass = *** ENCRYPTED ***
-2fa = false
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
- This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:
-
- See all libraries
-
-    rclone lsd seafile:
-
- Create a new library
-
-    rclone mkdir seafile:library
-
- List the contents of a library
-
-    rclone ls seafile:library
-
- Sync `/home/local/directory` to the remote library, deleting any
- excess files in the library.
-
-    rclone sync --interactive /home/local/directory seafile:library
-
- ### Configuration in library mode
-
- Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> seafile
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Seafile
-   \ "seafile"
-[snip]
-Storage> seafile
-** See help for seafile backend at: https://rclone.org/seafile/ **
-
-URL of seafile host to connect to
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
- 1 / Connect to cloud.seafile.com
-   \ "https://cloud.seafile.com/"
-url> http://my.seafile.server/
-User name (usually email address)
-Enter a string value. Press Enter for the default ("").
-user> me@example.com
-Password
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Two-factor authentication ('true' if the account has 2FA enabled)
-Enter a boolean value (true or false). Press Enter for the default ("false").
-2fa> true
-Name of the library. Leave blank to access all non-encrypted libraries.
-Enter a string value. Press Enter for the default ("").
-library> My Library
-Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> n
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
-Two-factor authentication: please enter your 2FA code
-2fa code> 123456
-Authenticating...
-Success!
---------------------
-[seafile]
-type = seafile
-url = http://my.seafile.server/
-user = me@example.com
-pass =
-2fa = true
-library = My Library
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
- You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.
-
- You specified `My Library` during the configuration. The root of the remote is pointing at the
- root of the library `My Library`:
-
- See all files in the library:
-
-    rclone lsd seafile:
-
- Create a new directory inside the library
-
-    rclone mkdir seafile:directory
-
- List the contents of a directory
-
-    rclone ls seafile:directory
-
- Sync `/home/local/directory` to the remote library, deleting any
- excess files in the library.
-
-    rclone sync --interactive /home/local/directory seafile:
-
- ### --fast-list
-
- Seafile version 7+ supports `--fast-list` which allows you to use fewer
- transactions in exchange for more memory. See the [rclone
- docs](https://rclone.org/docs/#fast-list) for more details.
- Please note this is not supported on seafile server version 6.x
-
- ### Restricted filename characters
-
- In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
- the following characters are also replaced:
-
- | Character | Value | Replacement |
- | --------- |:-----:|:-----------:|
- | / | 0x2F | / |
- | " | 0x22 | " |
- | \ | 0x5C | \ |
-
- Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
- as they can't be used in JSON strings.
-
- ### Seafile and rclone link
-
- Rclone supports generating share links for non-encrypted libraries only.
- They can either be for a file or a directory: - -rclone link seafile:seafile-tutorial.doc -http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ - - - or if run on a directory you will get: - -rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ - - - Please note a share link is unique for each file or directory. If you run a link command on a file/dir - that has already been shared, you will get the exact same link. - - ### Compatibility - - It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions: - - 6.3.4 community edition - - 7.0.5 community edition - - 7.1.3 community edition - - 9.0.10 community edition - - Versions below 6.0 are not supported. - Versions between 6.0 and 6.3 haven't been tested and might not work properly. - - Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server. - - - ### Standard options - - Here are the Standard options specific to seafile (seafile). - - #### --seafile-url - - URL of seafile host to connect to. - - Properties: - - - Config: url - - Env Var: RCLONE_SEAFILE_URL - - Type: string - - Required: true - - Examples: - - "https://cloud.seafile.com/" - - Connect to cloud.seafile.com. - - #### --seafile-user - - User name (usually email address). - - Properties: - - - Config: user - - Env Var: RCLONE_SEAFILE_USER - - Type: string - - Required: true - - #### --seafile-pass +This will guide you through an interactive setup process: + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / Proton Drive + \ "Proton Drive" + [snip] + Storage> protondrive + User name + user> you@protonmail.com Password. + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank + y/g/n> y + Enter the password: + password: + Confirm the password: + password: + Option 2fa. + 2FA code (if the account requires one) + Enter a value. Press Enter to leave empty. + 2fa> 123456 + Remote config + -------------------- + [remote] + type = protondrive + user = you@protonmail.com + pass = *** ENCRYPTED *** + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). +NOTE: The Proton Drive encryption keys need to have been already +generated after a regular login via the browser, otherwise attempting to +use the credentials in rclone will fail. - Properties: +Once configured you can then use rclone like this, - - Config: pass - - Env Var: RCLONE_SEAFILE_PASS - - Type: string - - Required: false +List directories in top level of your Proton Drive - #### --seafile-2fa + rclone lsd remote: - Two-factor authentication ('true' if the account has 2FA enabled). +List all the files in your Proton Drive - Properties: + rclone ls remote: - - Config: 2fa - - Env Var: RCLONE_SEAFILE_2FA - - Type: bool - - Default: false +To copy a local directory to an Proton Drive directory called backup - #### --seafile-library + rclone copy /home/source remote:backup - Name of the library. +Modification times and hashes - Leave blank to access all non-encrypted libraries. +Proton Drive Bridge does not support updating modification times yet. 
- Properties: +The SHA1 hash algorithm is supported. - - Config: library - - Env Var: RCLONE_SEAFILE_LIBRARY - - Type: string - - Required: false +Restricted filename characters - #### --seafile-library-key +Invalid UTF-8 bytes will be replaced, also left and right spaces will be +removed (code reference) - Library password (for encrypted libraries only). +Duplicated files - Leave blank if you pass it through the command line. +Proton Drive can not have two files with exactly the same name and path. +If the conflict occurs, depending on the advanced config, the file might +or might not be overwritten. - **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). +Mailbox password - Properties: +Please set your mailbox password in the advanced config section. - - Config: library_key - - Env Var: RCLONE_SEAFILE_LIBRARY_KEY - - Type: string - - Required: false +Caching - #### --seafile-auth-token +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. The event system, +which is the proton API system that provides visibility of what has +changed on the drive, is yet to be implemented, so updates from other +clients won’t be reflected in the cache. Thus, if there are concurrent +clients accessing the same mount point, then we might have a problem +with caching the stale data. - Authentication token. +Standard options - Properties: +Here are the Standard options specific to protondrive (Proton Drive). - - Config: auth_token - - Env Var: RCLONE_SEAFILE_AUTH_TOKEN - - Type: string - - Required: false +--protondrive-username - ### Advanced options +The username of your proton account - Here are the Advanced options specific to seafile (seafile). +Properties: - #### --seafile-create-library +- Config: username +- Env Var: RCLONE_PROTONDRIVE_USERNAME +- Type: string +- Required: true - Should rclone create a library if it doesn't exist. +--protondrive-password - Properties: +The password of your proton account. - - Config: create_library - - Env Var: RCLONE_SEAFILE_CREATE_LIBRARY - - Type: bool - - Default: false +NB Input to this must be obscured - see rclone obscure. - #### --seafile-encoding +Properties: - The encoding for the backend. +- Config: password +- Env Var: RCLONE_PROTONDRIVE_PASSWORD +- Type: string +- Required: true - See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. +--protondrive-2fa - Properties: +The 2FA code - - Config: encoding - - Env Var: RCLONE_SEAFILE_ENCODING - - Type: Encoding - - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 +The value can also be provided with --protondrive-2fa=000000 +The 2FA code of your proton drive account if the account is set up with +two-factor authentication +Properties: - # SFTP +- Config: 2fa +- Env Var: RCLONE_PROTONDRIVE_2FA +- Type: string +- Required: false - SFTP is the [Secure (or SSH) File Transfer - Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). +Advanced options - The SFTP backend can be used with a number of different providers: +Here are the Advanced options specific to protondrive (Proton Drive). +--protondrive-mailbox-password - - Hetzner Storage Box - - rsync.net +The mailbox password of your two-password proton account. 
+For more information regarding the mailbox password, please check the +following official knowledge base article: +https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - SFTP runs over SSH v2 and is installed as standard with most modern - SSH installations. +NB Input to this must be obscured - see rclone obscure. - Paths are specified as `remote:path`. If the path does not begin with - a `/` it is relative to the home directory of the user. An empty path - `remote:` refers to the user's home directory. For example, `rclone lsd remote:` - would list the home directory of the user configured in the rclone remote config - (`i.e /home/sftpuser`). However, `rclone lsd remote:/` would list the root - directory for remote machine (i.e. `/`) +Properties: - Note that some SFTP servers will need the leading / - Synology is a - good example of this. rsync.net and Hetzner, on the other hand, requires users to - OMIT the leading /. +- Config: mailbox_password +- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +- Type: string +- Required: false - Note that by default rclone will try to execute shell commands on - the server, see [shell access considerations](#shell-access-considerations). +--protondrive-client-uid - ## Configuration +Client uid key (internal use only) - Here is an example of making an SFTP configuration. First run +Properties: - rclone config +- Config: client_uid +- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +- Type: string +- Required: false - This will guide you through an interactive setup process. +--protondrive-client-access-token -No remotes found, make a new one? n) New remote s) Set configuration -password q) Quit config n/s/q> n name> remote Type of storage to -configure. Choose a number from below, or type in your own value [snip] -XX / SSH/SFTP  "sftp" [snip] Storage> sftp SSH host to connect to Choose -a number from below, or type in your own value 1 / Connect to -example.com  "example.com" host> example.com SSH username Enter a string -value. Press Enter for the default ("$USER"). user> sftpuser SSH port -number Enter a signed integer. Press Enter for the default (22). port> -SSH password, leave blank to use ssh-agent. y) Yes type in my own -password g) Generate random password n) No leave this optional password -blank y/g/n> n Path to unencrypted PEM-encoded private key file, leave -blank to use ssh-agent. 
key_file> Remote config -------------------- -[remote] host = example.com user = sftpuser port = pass = key_file = --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +Client access token key (internal use only) +Properties: - This remote is called `remote` and can now be used like this: +- Config: client_access_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +- Type: string +- Required: false - See all directories in the home directory +--protondrive-client-refresh-token - rclone lsd remote: +Client refresh token key (internal use only) - See all directories in the root directory +Properties: - rclone lsd remote:/ +- Config: client_refresh_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +- Type: string +- Required: false - Make a new directory +--protondrive-client-salted-key-pass - rclone mkdir remote:path/to/directory +Client salted key pass key (internal use only) - List the contents of a directory +Properties: - rclone ls remote:path/to/directory +- Config: client_salted_key_pass +- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +- Type: string +- Required: false - Sync `/home/local/directory` to the remote directory, deleting any - excess files in the directory. +--protondrive-encoding - rclone sync --interactive /home/local/directory remote:directory +The encoding for the backend. - Mount the remote path `/srv/www-data/` to the local path - `/mnt/www-data` +See the encoding section in the overview for more info. - rclone mount remote:/srv/www-data/ /mnt/www-data +Properties: - ### SSH Authentication +- Config: encoding +- Env Var: RCLONE_PROTONDRIVE_ENCODING +- Type: Encoding +- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - The SFTP remote supports three authentication methods: +--protondrive-original-file-size - * Password - * Key file, including certificate signed keys - * ssh-agent +Return the file size before encryption - Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`. - Only unencrypted OpenSSH or PEM encrypted files are supported. +The size of the encrypted file will be different from (bigger than) the +original file size. Unless there is a reason to return the file size +after encryption is performed, otherwise, set this option to true, as +features like Open() which will need to be supplied with original +content size, will fail to operate properly - The key file can be specified in either an external file (key_file) or contained within the - rclone config file (key_pem). If using key_pem in the config file, the entry should be on a - single line with new line ('\n' or '\r\n') separating lines. i.e. +Properties: - key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY----- +- Config: original_file_size +- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE +- Type: bool +- Default: true - This will generate it correctly for key_pem for use in the config: +--protondrive-app-version - awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa +The app version string - If you don't specify `pass`, `key_file`, or `key_pem` or `ask_password` then - rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent` - to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can - also be specified to force the usage of a specific key in the ssh-agent. +The app version string indicates the client that is currently performing +the API request. This information is required and will be sent with +every API request. 
- Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment. +Properties: - If you set the `ask_password` option, rclone will prompt for a password when - needed and no password has been configured. +- Config: app_version +- Env Var: RCLONE_PROTONDRIVE_APP_VERSION +- Type: string +- Default: "macos-drive@1.0.0-alpha.1+rclone" - #### Certificate-signed keys +--protondrive-replace-existing-draft - With traditional key-based authentication, you configure your private key only, - and the public key built into it will be used during the authentication process. +Create a new revision when filename conflict is detected - If you have a certificate you may use it to sign your public key, creating a - separate SSH user certificate that should be used instead of the plain public key - extracted from the private key. Then you must provide the path to the - user certificate public key file in `pubkey_file`. +When a file upload is cancelled or failed before completion, a draft +will be created and the subsequent upload of the same file to the same +location will be reported as a conflict. - Note: This is not the traditional public key paired with your private key, - typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in - `pubkey_file` will not work. +The value can also be set by --protondrive-replace-existing-draft=true - Example: +If the option is set to true, the draft will be replaced and then the +upload operation will restart. If there are other clients also uploading +at the same file location at the same time, the behavior is currently +unknown. Need to set to true for integration tests. If the option is set +to false, an error "a draft exist - usually this means a file is being +uploaded at another client, or, there was a failed upload attempt" will +be returned, and no upload will happen. -[remote] type = sftp host = example.com user = sftpuser key_file = -~/id_rsa pubkey_file = ~/id_rsa-cert.pub +Properties: +- Config: replace_existing_draft +- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT +- Type: bool +- Default: false - If you concatenate a cert with a private key then you can specify the - merged file in both places. +--protondrive-enable-caching - Note: the cert must come first in the file. e.g. +Caches the files and folders metadata to reduce API calls + +Notice: If you are mounting ProtonDrive as a VFS, please disable this +feature, as the current implementation doesn't update or clear the cache +when there are external changes. + +The files and folders on ProtonDrive are represented as links with +keyrings, which can be cached to improve performance and be friendly to +the API server. + +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. The event system, +which is the proton API system that provides visibility of what has +changed on the drive, is yet to be implemented, so updates from other +clients won’t be reflected in the cache. Thus, if there are concurrent +clients accessing the same mount point, then we might have a problem +with caching the stale data. + +Properties: + +- Config: enable_caching +- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING +- Type: bool +- Default: true + +--protondrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +- Type: string +- Required: false + +Limitations + +This backend uses the Proton-API-Bridge, which is based on +go-proton-api, a fork of the official repo. 
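+
+Since these libraries are unofficial (see the rest of this section),
+verbose logs help when reporting problems; a minimal hedged sketch using
+standard rclone debugging flags (the log file name is just an example):
+
+    # -vv enables debug logging; --dump headers shows the HTTP exchanges.
+    rclone lsd remote: -vv --dump headers 2> protondrive-debug.log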
+
+There is no official API documentation available from Proton Drive. But,
+thanks to Proton open sourcing proton-go-api and the web, iOS, and
+Android client codebases, we don't need to completely reverse engineer
+the APIs by observing the web client traffic!
+
+proton-go-api provides the basic building blocks of API calls and error
+handling, such as 429 exponential back-off, but it is pretty much just a
+barebone interface to the Proton API. For example, the encryption and
+decryption of the Proton Drive file are not provided in this library.
+
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be
+built on top of it quickly. This codebase handles the intricate tasks
+before and after calling Proton APIs, particularly the complex
+encryption scheme, allowing developers to implement features for other
+software on top of this codebase. There are likely quite a few errors in
+this library, as there isn't official documentation available.
+
+put.io
+
+Paths are specified as remote:path
+
+put.io paths may be as deep as required, e.g.
+remote:directory/subdirectory.
+
+Configuration
+
+The initial setup for put.io involves getting a token from put.io which
+you need to do in your browser. rclone config walks you through it.
+
+Here is an example of how to make a remote called putio. First run:
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+    name> putio
+    Type of storage to configure.
+    Enter a string value. Press Enter for the default ("").
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / Put.io
+       \ "putio"
+    [snip]
+    Storage> putio
+    ** See help for putio backend at: https://rclone.org/putio/ **
+
+    Remote config
+    Use web browser to automatically authenticate rclone with remote?
+     * Say Y if the machine running rclone has a web browser you can use
+     * Say N if running rclone on a (remote) machine without web browser access
+    If not sure try Y. If Y failed, try N.
+    y) Yes
+    n) No
+    y/n> y
+    If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+    Log in and authorize rclone for access
+    Waiting for code...
+    Got code
+    --------------------
+    [putio]
+    type = putio
+    token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+    Current remotes:
+
+    Name                 Type
+    ====                 ====
+    putio                putio
+
+    e) Edit existing remote
+    n) New remote
+    d) Delete remote
+    r) Rename remote
+    c) Copy remote
+    s) Set configuration password
+    q) Quit config
+    e/n/d/r/c/s/q> q
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from put.io if using the web browser to automatically
+authenticate. This only runs from the moment it opens your browser to
+the moment you get back the verification code. This is on
+http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall, or use manual mode.
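+
+If the machine running rclone has no browser, a hedged sketch of the
+flow from the remote setup docs (the exact prompts vary by rclone
+version) is to run the authorization step on a machine that does have
+one:
+
+    # On a machine with a browser and rclone installed:
+    rclone authorize "putio"
+    # Log in when the browser opens, then copy the token JSON that is
+    # printed and paste it into rclone config on the headless machine
+    # when it asks for the token.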
+ +You can then use it like this, + +List directories in top level of your put.io + + rclone lsd remote: + +List all the files in your put.io + + rclone ls remote: + +To copy a local directory to a put.io directory called backup + + rclone copy /home/source remote:backup + +Restricted filename characters + +In addition to the default restricted characters set the following +characters are also replaced: + + Character Value Replacement + ----------- ------- ------------- + \ 0x5C \ + +Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON +strings. + +Standard options + +Here are the Standard options specific to putio (Put.io). + +--putio-client-id + +OAuth Client Id. + +Leave blank normally. + +Properties: + +- Config: client_id +- Env Var: RCLONE_PUTIO_CLIENT_ID +- Type: string +- Required: false + +--putio-client-secret + +OAuth Client Secret. + +Leave blank normally. + +Properties: + +- Config: client_secret +- Env Var: RCLONE_PUTIO_CLIENT_SECRET +- Type: string +- Required: false + +Advanced options + +Here are the Advanced options specific to putio (Put.io). + +--putio-token + +OAuth Access Token as a JSON blob. + +Properties: + +- Config: token +- Env Var: RCLONE_PUTIO_TOKEN +- Type: string +- Required: false + +--putio-auth-url + +Auth server URL. + +Leave blank to use the provider defaults. + +Properties: + +- Config: auth_url +- Env Var: RCLONE_PUTIO_AUTH_URL +- Type: string +- Required: false + +--putio-token-url + +Token server url. + +Leave blank to use the provider defaults. + +Properties: + +- Config: token_url +- Env Var: RCLONE_PUTIO_TOKEN_URL +- Type: string +- Required: false + +--putio-encoding + +The encoding for the backend. + +See the encoding section in the overview for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_PUTIO_ENCODING +- Type: Encoding +- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot + +--putio-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PUTIO_DESCRIPTION +- Type: string +- Required: false + +Limitations + +put.io has rate limiting. When you hit a limit, rclone automatically +retries after waiting the amount of time requested by the server. + +If you want to avoid ever hitting these limits, you may use the +--tpslimit flag with a low number. Note that the imposed limits may be +different for different operations, and may change over time. + +Proton Drive + +Proton Drive is an end-to-end encrypted Swiss vault for your files that +protects your data. + +This is an rclone backend for Proton Drive which supports the file +transfer features of Proton Drive using the same client-side encryption. + +Due to the fact that Proton Drive doesn't publish its API documentation, +this backend is implemented with best efforts by reading the +open-sourced client source code and observing the Proton Drive traffic +in the browser. + +NB This backend is currently in Beta. It is believed to be correct and +all the integration tests pass. However the Proton Drive protocol has +evolved over time there may be accounts it is not compatible with. +Please post on the rclone forum if you find an incompatibility. + +Paths are specified as remote:path + +Paths may be as deep as required, e.g. remote:directory/subdirectory. + +Configurations + +Here is an example of how to make a remote called remote. First run: + + rclone config + +This will guide you through an interactive setup process: + + No remotes found, make a new one? 
+ n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. + Choose a number from below, or type in your own value + [snip] + XX / Proton Drive + \ "Proton Drive" + [snip] + Storage> protondrive + User name + user> you@protonmail.com + Password. + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank + y/g/n> y + Enter the password: + password: + Confirm the password: + password: + Option 2fa. + 2FA code (if the account requires one) + Enter a value. Press Enter to leave empty. + 2fa> 123456 + Remote config + -------------------- + [remote] + type = protondrive + user = you@protonmail.com + pass = *** ENCRYPTED *** + -------------------- + y) Yes this is OK + e) Edit this remote + d) Delete this remote + y/e/d> y + +NOTE: The Proton Drive encryption keys need to have been already +generated after a regular login via the browser, otherwise attempting to +use the credentials in rclone will fail. + +Once configured you can then use rclone like this, + +List directories in top level of your Proton Drive + + rclone lsd remote: + +List all the files in your Proton Drive + + rclone ls remote: + +To copy a local directory to an Proton Drive directory called backup + + rclone copy /home/source remote:backup + +Modification times and hashes + +Proton Drive Bridge does not support updating modification times yet. + +The SHA1 hash algorithm is supported. + +Restricted filename characters + +Invalid UTF-8 bytes will be replaced, also left and right spaces will be +removed (code reference) + +Duplicated files + +Proton Drive can not have two files with exactly the same name and path. +If the conflict occurs, depending on the advanced config, the file might +or might not be overwritten. + +Mailbox password + +Please set your mailbox password in the advanced config section. + +Caching + +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. The event system, +which is the proton API system that provides visibility of what has +changed on the drive, is yet to be implemented, so updates from other +clients won’t be reflected in the cache. Thus, if there are concurrent +clients accessing the same mount point, then we might have a problem +with caching the stale data. + +Standard options + +Here are the Standard options specific to protondrive (Proton Drive). + +--protondrive-username + +The username of your proton account + +Properties: + +- Config: username +- Env Var: RCLONE_PROTONDRIVE_USERNAME +- Type: string +- Required: true + +--protondrive-password + +The password of your proton account. + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: password +- Env Var: RCLONE_PROTONDRIVE_PASSWORD +- Type: string +- Required: true + +--protondrive-2fa + +The 2FA code + +The value can also be provided with --protondrive-2fa=000000 + +The 2FA code of your proton drive account if the account is set up with +two-factor authentication + +Properties: + +- Config: 2fa +- Env Var: RCLONE_PROTONDRIVE_2FA +- Type: string +- Required: false + +Advanced options + +Here are the Advanced options specific to protondrive (Proton Drive). + +--protondrive-mailbox-password + +The mailbox password of your two-password proton account. 
+ +For more information regarding the mailbox password, please check the +following official knowledge base article: +https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: mailbox_password +- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +- Type: string +- Required: false + +--protondrive-client-uid + +Client uid key (internal use only) + +Properties: + +- Config: client_uid +- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +- Type: string +- Required: false + +--protondrive-client-access-token + +Client access token key (internal use only) + +Properties: + +- Config: client_access_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +- Type: string +- Required: false + +--protondrive-client-refresh-token + +Client refresh token key (internal use only) + +Properties: + +- Config: client_refresh_token +- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +- Type: string +- Required: false + +--protondrive-client-salted-key-pass + +Client salted key pass key (internal use only) + +Properties: + +- Config: client_salted_key_pass +- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +- Type: string +- Required: false + +--protondrive-encoding + +The encoding for the backend. + +See the encoding section in the overview for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_PROTONDRIVE_ENCODING +- Type: Encoding +- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot + +--protondrive-original-file-size + +Return the file size before encryption + +The size of the encrypted file will be different from (bigger than) the +original file size. Unless there is a reason to return the file size +after encryption is performed, otherwise, set this option to true, as +features like Open() which will need to be supplied with original +content size, will fail to operate properly + +Properties: + +- Config: original_file_size +- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE +- Type: bool +- Default: true + +--protondrive-app-version + +The app version string + +The app version string indicates the client that is currently performing +the API request. This information is required and will be sent with +every API request. + +Properties: + +- Config: app_version +- Env Var: RCLONE_PROTONDRIVE_APP_VERSION +- Type: string +- Default: "macos-drive@1.0.0-alpha.1+rclone" + +--protondrive-replace-existing-draft + +Create a new revision when filename conflict is detected + +When a file upload is cancelled or failed before completion, a draft +will be created and the subsequent upload of the same file to the same +location will be reported as a conflict. + +The value can also be set by --protondrive-replace-existing-draft=true + +If the option is set to true, the draft will be replaced and then the +upload operation will restart. If there are other clients also uploading +at the same file location at the same time, the behavior is currently +unknown. Need to set to true for integration tests. If the option is set +to false, an error "a draft exist - usually this means a file is being +uploaded at another client, or, there was a failed upload attempt" will +be returned, and no upload will happen. 
+
+Properties:
+
+- Config: replace_existing_draft
+- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+- Type: bool
+- Default: false
+
+--protondrive-enable-caching
+
+Caches the files and folders metadata to reduce API calls
+
+Notice: If you are mounting ProtonDrive as a VFS, please disable this
+feature, as the current implementation doesn't update or clear the cache
+when there are external changes.
+
+The files and folders on ProtonDrive are represented as links with
+keyrings, which can be cached to improve performance and be friendly to
+the API server.
+
+The cache is currently built for the case when rclone is the only
+instance performing operations on the mount point. The event system,
+which is the proton API system that provides visibility of what has
+changed on the drive, is yet to be implemented, so updates from other
+clients won’t be reflected in the cache. Thus, if there are concurrent
+clients accessing the same mount point, then we might have a problem
+with caching stale data.
+
+Properties:
+
+- Config: enable_caching
+- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
+- Type: bool
+- Default: true
+
+--protondrive-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_PROTONDRIVE_DESCRIPTION
+- Type: string
+- Required: false
+
+Limitations
+
+This backend uses the Proton-API-Bridge, which is based on
+go-proton-api, a fork of the official repo.
+
+There is no official API documentation available from Proton Drive. But,
+thanks to Proton open sourcing proton-go-api and the web, iOS, and
+Android client codebases, we don't need to completely reverse engineer
+the APIs by observing the web client traffic!
+
+proton-go-api provides the basic building blocks of API calls and error
+handling, such as 429 exponential back-off, but it is pretty much just a
+barebone interface to the Proton API. For example, the encryption and
+decryption of the Proton Drive file are not provided in this library.
+
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be
+built on top of it quickly. This codebase handles the intricate tasks
+before and after calling Proton APIs, particularly the complex
+encryption scheme, allowing developers to implement features for other
+software on top of this codebase. There are likely quite a few errors in
+this library, as there isn't official documentation available.
+
+Seafile
+
+This is a backend for the Seafile storage service:
+
+- It works with both the free community edition and the professional
+  edition.
+- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
+- Encrypted libraries are also supported.
+- It supports 2FA enabled users.
+- Using a Library API Token is not supported.
+
+Configuration
+
+There are two distinct modes in which you can set up your remote:
+
+- You point your remote to the root of the server, meaning you don't
+  specify a library during the configuration: paths are specified as
+  remote:library. You may put subdirectories in too, e.g.
+  remote:library/path/to/dir.
+- You point your remote to a specific library during the configuration:
+  paths are specified as remote:path/to/dir. This is the recommended
+  mode when using encrypted libraries. (This mode is possibly slightly
+  faster than the root mode.)
+
+Configuration in root mode
+
+Here is an example of making a seafile configuration for a user with no
+two-factor authentication. First run
+
+    rclone config
+
+This will guide you through an interactive setup process.
To +authenticate you will need the URL of your server, your email (or +username) and your password. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> seafile + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / Seafile + \ "seafile" + [snip] + Storage> seafile + ** See help for seafile backend at: https://rclone.org/seafile/ ** + + URL of seafile host to connect to + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \ "https://cloud.seafile.com/" + url> http://my.seafile.server/ + User name (usually email address) + Enter a string value. Press Enter for the default (""). + user> me@example.com + Password + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank (default) + y/g> y + Enter the password: + password: + Confirm the password: + password: + Two-factor authentication ('true' if the account has 2FA enabled) + Enter a boolean value (true or false). Press Enter for the default ("false"). + 2fa> false + Name of the library. Leave blank to access all non-encrypted libraries. + Enter a string value. Press Enter for the default (""). + library> + Library password (for encrypted libraries only). Leave blank if you pass it through the command line. + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank (default) + y/g/n> n + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + Two-factor authentication is not enabled on this account. + -------------------- + [seafile] + type = seafile + url = http://my.seafile.server/ + user = me@example.com + pass = *** ENCRYPTED *** + 2fa = false + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +This remote is called seafile. It's pointing to the root of your seafile +server and can now be used like this: + +See all libraries + + rclone lsd seafile: + +Create a new library + + rclone mkdir seafile:library + +List the contents of a library + + rclone ls seafile:library + +Sync /home/local/directory to the remote library, deleting any excess +files in the library. + + rclone sync --interactive /home/local/directory seafile:library + +Configuration in library mode + +Here's an example of a configuration in library mode with a user that +has the two-factor authentication enabled. Your 2FA code will be asked +at the end of the configuration, and will attempt to authenticate you: + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> seafile + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + [snip] + XX / Seafile + \ "seafile" + [snip] + Storage> seafile + ** See help for seafile backend at: https://rclone.org/seafile/ ** + + URL of seafile host to connect to + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \ "https://cloud.seafile.com/" + url> http://my.seafile.server/ + User name (usually email address) + Enter a string value. Press Enter for the default (""). 
+ user> me@example.com + Password + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank (default) + y/g> y + Enter the password: + password: + Confirm the password: + password: + Two-factor authentication ('true' if the account has 2FA enabled) + Enter a boolean value (true or false). Press Enter for the default ("false"). + 2fa> true + Name of the library. Leave blank to access all non-encrypted libraries. + Enter a string value. Press Enter for the default (""). + library> My Library + Library password (for encrypted libraries only). Leave blank if you pass it through the command line. + y) Yes type in my own password + g) Generate random password + n) No leave this optional password blank (default) + y/g/n> n + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + Two-factor authentication: please enter your 2FA code + 2fa code> 123456 + Authenticating... + Success! + -------------------- + [seafile] + type = seafile + url = http://my.seafile.server/ + user = me@example.com + pass = + 2fa = true + library = My Library + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +You'll notice your password is blank in the configuration. It's because +we only need the password to authenticate you once. + +You specified My Library during the configuration. The root of the +remote is pointing at the root of the library My Library: + +See all files in the library: + + rclone lsd seafile: + +Create a new directory inside the library + + rclone mkdir seafile:directory + +List the contents of a directory + + rclone ls seafile:directory + +Sync /home/local/directory to the remote library, deleting any excess +files in the library. + + rclone sync --interactive /home/local/directory seafile: + +--fast-list + +Seafile version 7+ supports --fast-list which allows you to use fewer +transactions in exchange for more memory. See the rclone docs for more +details. Please note this is not supported on seafile server version 6.x + +Restricted filename characters + +In addition to the default restricted characters set the following +characters are also replaced: + + Character Value Replacement + ----------- ------- ------------- + / 0x2F / + " 0x22 " + \ 0x5C \ + +Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON +strings. + +Seafile and rclone link + +Rclone supports generating share links for non-encrypted libraries only. +They can either be for a file or a directory: + + rclone link seafile:seafile-tutorial.doc + http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ + +or if run on a directory you will get: + + rclone link seafile:dir + http://my.seafile.server/d/9ea2455f6f55478bbb0d/ + +Please note a share link is unique for each file or directory. If you +run a link command on a file/dir that has already been shared, you will +get the exact same link. + +Compatibility + +It has been actively developed using the seafile docker image of these +versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 +community edition - 9.0.10 community edition + +Versions below 6.0 are not supported. Versions between 6.0 and 6.3 +haven't been tested and might not work properly. + +Each new version of rclone is automatically tested against the latest +docker image of the seafile community server. + +Standard options + +Here are the Standard options specific to seafile (seafile). + +--seafile-url + +URL of seafile host to connect to. 
+ +Properties: + +- Config: url +- Env Var: RCLONE_SEAFILE_URL +- Type: string +- Required: true +- Examples: + - "https://cloud.seafile.com/" + - Connect to cloud.seafile.com. + +--seafile-user + +User name (usually email address). + +Properties: + +- Config: user +- Env Var: RCLONE_SEAFILE_USER +- Type: string +- Required: true + +--seafile-pass + +Password. + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: pass +- Env Var: RCLONE_SEAFILE_PASS +- Type: string +- Required: false + +--seafile-2fa + +Two-factor authentication ('true' if the account has 2FA enabled). + +Properties: + +- Config: 2fa +- Env Var: RCLONE_SEAFILE_2FA +- Type: bool +- Default: false + +--seafile-library + +Name of the library. + +Leave blank to access all non-encrypted libraries. + +Properties: + +- Config: library +- Env Var: RCLONE_SEAFILE_LIBRARY +- Type: string +- Required: false + +--seafile-library-key + +Library password (for encrypted libraries only). + +Leave blank if you pass it through the command line. + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: library_key +- Env Var: RCLONE_SEAFILE_LIBRARY_KEY +- Type: string +- Required: false + +--seafile-auth-token + +Authentication token. + +Properties: + +- Config: auth_token +- Env Var: RCLONE_SEAFILE_AUTH_TOKEN +- Type: string +- Required: false + +Advanced options + +Here are the Advanced options specific to seafile (seafile). + +--seafile-create-library + +Should rclone create a library if it doesn't exist. + +Properties: + +- Config: create_library +- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY +- Type: bool +- Default: false + +--seafile-encoding + +The encoding for the backend. + +See the encoding section in the overview for more info. + +Properties: + +- Config: encoding +- Env Var: RCLONE_SEAFILE_ENCODING +- Type: Encoding +- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 + +--seafile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SEAFILE_DESCRIPTION +- Type: string +- Required: false + +SFTP + +SFTP is the Secure (or SSH) File Transfer Protocol. + +The SFTP backend can be used with a number of different providers: + +- Hetzner Storage Box +- rsync.net + +SFTP runs over SSH v2 and is installed as standard with most modern SSH +installations. + +Paths are specified as remote:path. If the path does not begin with a / +it is relative to the home directory of the user. An empty path remote: +refers to the user's home directory. For example, rclone lsd remote: +would list the home directory of the user configured in the rclone +remote config (i.e /home/sftpuser). However, rclone lsd remote:/ would +list the root directory for remote machine (i.e. /) + +Note that some SFTP servers will need the leading / - Synology is a good +example of this. rsync.net and Hetzner, on the other hand, requires +users to OMIT the leading /. + +Note that by default rclone will try to execute shell commands on the +server, see shell access considerations. + +Configuration + +Here is an example of making an SFTP configuration. First run + + rclone config + +This will guide you through an interactive setup process. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + name> remote + Type of storage to configure. 
+    Choose a number from below, or type in your own value
+    [snip]
+    XX / SSH/SFTP
+       \ "sftp"
+    [snip]
+    Storage> sftp
+    SSH host to connect to
+    Choose a number from below, or type in your own value
+     1 / Connect to example.com
+       \ "example.com"
+    host> example.com
+    SSH username
+    Enter a string value. Press Enter for the default ("$USER").
+    user> sftpuser
+    SSH port number
+    Enter a signed integer. Press Enter for the default (22).
+    port>
+    SSH password, leave blank to use ssh-agent.
+    y) Yes type in my own password
+    g) Generate random password
+    n) No leave this optional password blank
+    y/g/n> n
+    Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+    key_file>
+    Remote config
+    --------------------
+    [remote]
+    host = example.com
+    user = sftpuser
+    port =
+    pass =
+    key_file =
+    --------------------
+    y) Yes this is OK
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+This remote is called remote and can now be used like this:
+
+See all directories in the home directory
+
+    rclone lsd remote:
+
+See all directories in the root directory
+
+    rclone lsd remote:/
+
+Make a new directory
+
+    rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+    rclone ls remote:path/to/directory
+
+Sync /home/local/directory to the remote directory, deleting any excess
+files in the directory.
+
+    rclone sync --interactive /home/local/directory remote:directory
+
+Mount the remote path /srv/www-data/ to the local path /mnt/www-data
+
+    rclone mount remote:/srv/www-data/ /mnt/www-data
+
+SSH Authentication
+
+The SFTP remote supports three authentication methods:
+
+- Password
+- Key file, including certificate signed keys
+- ssh-agent
+
+Key files should be PEM-encoded private key files. For instance
+/home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files
+are supported.
+
+The key file can be specified in either an external file (key_file) or
+contained within the rclone config file (key_pem). If using key_pem in
+the config file, the entry should be on a single line with a new line
+('\n' or '\r\n') separating lines. i.e.
+
+    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
+
+This will generate a correctly formatted key_pem entry for use in the
+config:
+
+    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
+
+If you don't specify pass, key_file, or key_pem, or set ask_password,
+then rclone will attempt to contact an ssh-agent. You can also specify
+key_use_agent to force the usage of an ssh-agent. In this case key_file
+or key_pem can also be specified to force the usage of a specific key in
+the ssh-agent.
+
+Using an ssh-agent is the only way to load encrypted OpenSSH keys at the
+moment.
+
+If you set the ask_password option, rclone will prompt for a password
+when needed and no password has been configured.
+
+Certificate-signed keys
+
+With traditional key-based authentication, you configure your private
+key only, and the public key built into it will be used during the
+authentication process.
+
+If you have a certificate you may use it to sign your public key,
+creating a separate SSH user certificate that should be used instead of
+the plain public key extracted from the private key. Then you must
+provide the path to the user certificate public key file in pubkey_file.
+
+Note: This is not the traditional public key paired with your private
+key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path
+in pubkey_file will not work.
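+
+For background, a hedged sketch of how such a user certificate might be
+produced with OpenSSH (the CA key path and identity here are
+illustrative assumptions, not rclone requirements):
+
+    # Sign the user's public key with a CA key; OpenSSH writes the
+    # certificate next to it as ~/.ssh/id_rsa-cert.pub.
+    ssh-keygen -s ca_key -I sftpuser -n sftpuser ~/.ssh/id_rsa.pub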
+
+Example:
+
+    [remote]
+    type = sftp
+    host = example.com
+    user = sftpuser
+    key_file = ~/id_rsa
+    pubkey_file = ~/id_rsa-cert.pub
+
+If you concatenate a cert with a private key then you can specify the
+merged file in both places.
+
+Note: the cert must come first in the file. e.g.
+
+    cat id_rsa-cert.pub id_rsa > merged_key
+
-```
-cat id_rsa-cert.pub id_rsa > merged_key
-```
-
-### Host key validation
+Host key validation
+
-By default rclone will not check the server's host key for validation. This
-can allow an attacker to replace a server with their own and if you use
-password authentication then this can lead to that password being exposed.
+By default rclone will not check the server's host key for validation.
+This can allow an attacker to replace a server with their own and if you
+use password authentication then this can lead to that password being
+exposed.
+
-Host key matching, using standard `known_hosts` files can be turned on by
-enabling the `known_hosts_file` option. This can point to the file maintained
-by `OpenSSH` or can point to a unique file.
+Host key matching, using standard known_hosts files can be turned on by
+enabling the known_hosts_file option. This can point to the file
+maintained by OpenSSH or can point to a unique file.
+
-e.g. using the OpenSSH `known_hosts` file:
+e.g. using the OpenSSH known_hosts file:
+
-```
 [remote]
 type = sftp
 host = example.com
@@ -44010,6 +47005,17 @@ Properties:
 - Type: bool
 - Default: false
 
+--sftp-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SFTP_DESCRIPTION
+- Type: string
+- Required: false
+
 Limitations
 
 On some SFTP servers (e.g. Synology) the paths are different for SSH and
@@ -44290,6 +47296,17 @@ Properties:
 -
 Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
 
+--smb-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SMB_DESCRIPTION
+- Type: string
+- Required: false
+
 Storj
 
 Storj is an encrypted, secure, and cost-effective object storage service
@@ -44575,6 +47592,22 @@ Properties:
 - Type: string
 - Required: false
 
+Advanced options
+
+Here are the Advanced options specific to storj (Storj Decentralized
+Cloud Storage).
+
+--storj-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_STORJ_DESCRIPTION
+- Type: string
+- Required: false
+
 Usage
 
 Paths are specified as remote:bucket (or remote: for the lsf command.)
@@ -44982,6 +48015,17 @@ Properties:
 - Type: Encoding
 - Default: Slash,Ctl,InvalidUtf8,Dot
 
+--sugarsync-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SUGARSYNC_DESCRIPTION
+- Type: string
+- Required: false
+
 Limitations
 
 rclone about is not supported by the SugarSync backend. Backends without
@@ -45138,6 +48182,17 @@ Properties:
 -
 Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
 
+--uptobox-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_UPTOBOX_DESCRIPTION
+- Type: string
+- Required: false
+
 Limitations
 
 Uptobox will delete inactive files that have not been accessed in 60
@@ -45504,6 +48559,17 @@ Properties:
 - Type: SizeSuffix
 - Default: 1Gi
 
+--union-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_UNION_DESCRIPTION
+- Type: string
+- Required: false
+
 Metadata
 
 Any metadata supported by the underlying remote is read and written.
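
A minimal sketch of the description option these hunks add to every
backend (the remote name and the wording are hypothetical). It can be
set in the config file or via the documented RCLONE_<BACKEND>_DESCRIPTION
environment variable:

    [mysftp]
    type = sftp
    host = example.com
    description = Office NAS, reachable over SFTP

or, equivalently, set it after the remote exists:

    rclone config update mysftp description "Office NAS, reachable over SFTP"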
@@ -45782,6 +48848,28 @@ Properties: - Type: SizeSuffix - Default: 10Mi +--webdav-owncloud-exclude-shares + +Exclude ownCloud shares + +Properties: + +- Config: owncloud_exclude_shares +- Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES +- Type: bool +- Default: false + +--webdav-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_WEBDAV_DESCRIPTION +- Type: string +- Required: false + Provider notes See below for notes on specific providers. @@ -46165,6 +49253,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +--yandex-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_YANDEX_DESCRIPTION +- Type: string +- Required: false + Limitations When uploading very large files (bigger than about 5 GiB) you will need @@ -46412,6 +49511,17 @@ Properties: - Type: Encoding - Default: Del,Ctl,InvalidUtf8 +--zoho-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_ZOHO_DESCRIPTION +- Type: string +- Required: false + Setting up your own client_id For Zoho we advise you to set up your own client_id. To do so you have @@ -46966,6 +50076,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +--local-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LOCAL_DESCRIPTION +- Type: string +- Required: false + Metadata Depending on which OS is in use the local backend may return only some @@ -46977,6 +50098,8 @@ pkg/attrs#47). User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix. +Metadata is supported on files and directories. + Here are the possible system metadata items for the local backend. --------------------------------------------------------------------------------------------------- @@ -47039,6 +50162,336 @@ Options: Changelog +v1.66.0 - 2024-03-10 + +See commits + +- Major features + - Rclone will now sync directory modification times if the backend + supports it. + - This can be disabled with --no-update-dir-modtime + - See the overview and look for the D flags in the ModTime + column to see which backends support it. + - Rclone will now sync directory metadata if the backend supports + it when -M/--metadata is in use. + - See the overview and look for the D flags in the Metadata + column to see which backends support it. 
+    - Bisync has received many updates; see below for more details or
+      bisync's changelog
+- Removed backends
+    - amazonclouddrive: Remove Amazon Drive backend code and docs
+      (Nick Craig-Wood)
+- New Features
+    - backend
+        - Add description field for all backends (Paul Stern)
+    - build
+        - Update to go1.22 and make go1.20 the minimum required
+          version (Nick Craig-Wood)
+        - Fix CVE-2024-24786 by upgrading google.golang.org/protobuf
+          (Nick Craig-Wood)
+    - check: Respect --no-unicode-normalization and --ignore-case-sync
+      for --checkfile (nielash)
+    - cmd: Much improved shell auto completion which reduces the size
+      of the completion file and works faster (Nick Craig-Wood)
+    - doc updates (albertony, ben-ba, Eli, emyarod, huajin tong, Jack
+      Provance, kapitainsky, keongalvin, Nick Craig-Wood, nielash,
+      rarspace01, rzitzer, Tera, Vincent Murphy)
+    - fs: Add more detailed logging for file includes/excludes (Kyle
+      Reynolds)
+    - lsf
+        - Add --time-format flag (nielash)
+        - Make metadata appear for directories (Nick Craig-Wood)
+    - lsjson: Make metadata appear for directories (Nick Craig-Wood)
+    - rc
+        - Add srcFs and dstFs to core/stats and core/transferred stats
+          (Nick Craig-Wood)
+        - Add operations/hashsum to the rc as rclone hashsum
+          equivalent (Nick Craig-Wood)
+        - Add config/paths to the rc as rclone config paths equivalent
+          (Nick Craig-Wood)
+    - sync
+        - Optionally report list of synced paths to file (nielash)
+        - Implement directory sync for mod times and metadata (Nick
+          Craig-Wood)
+        - Don't set directory modtimes if already set (nielash)
+        - Don't sync directory modtimes from backends which don't have
+          directories (Nick Craig-Wood)
+- Bug Fixes
+    - backend
+        - Make backends which use oauth implement the Shutdown and
+          shutdown the oauth properly (rkonfj)
+    - bisync
+        - Handle unicode and case normalization consistently (nielash)
+        - Partial uploads known issue on local/ftp/sftp has been
+          resolved (unless using --inplace) (nielash)
+        - Fixed handling of unicode normalization and case
+          insensitivity, support for --fix-case, --ignore-case-sync,
+          --no-unicode-normalization (nielash)
+        - Bisync no longer fails to find the correct listing file when
+          configs are overridden with backend-specific flags.
+          (nielash)
+    - nfsmount
+        - Fix exit after external unmount (nielash)
+        - Fix --volname being ignored (nielash)
+    - operations
+        - Fix renaming a file on macOS (nielash)
+        - Fix case-insensitive moves in operations.Move (nielash)
+        - Fix TestCaseInsensitiveMoveFileDryRun on chunker integration
+          tests (nielash)
+        - Fix TestMkdirModTime test (Nick Craig-Wood)
+        - Fix TestSetDirModTime for backends with SetDirModTime but
+          not Metadata (Nick Craig-Wood)
+        - Fix typo in log messages (nielash)
+    - serve nfs: Fix writing files via Finder on macOS (nielash)
+    - serve restic: Fix error handling (Michael Eischer)
+    - serve webdav: Fix --baseurl without leading / (Nick Craig-Wood)
+    - stats: Fix race between ResetCounters and stopAverageLoop called
+      from time.AfterFunc (Nick Craig-Wood)
+    - sync
+        - --fix-case flag to rename case insensitive dest (nielash)
+        - Use operations.DirMove instead of sync.MoveDir for
+          --fix-case (nielash)
+    - systemd: Fix detection and switch to the coreos package
+      everywhere rather than having 2 separate libraries (Anagh Kumar
+      Baranwal)
+- Mount
+    - Fix macOS not noticing errors with --daemon (Nick Craig-Wood)
+    - Notice daemon dying much quicker (Nick Craig-Wood)
+- VFS
+    - Fix unicode normalization on macOS (nielash)
+- Bisync
+    - Copies and deletes are now handled in one operation instead of
+      two (nielash)
+    - --track-renames and --backup-dir are now supported (nielash)
+    - Final listings are now generated from sync results, to avoid
+      needing to re-list (nielash)
+    - Bisync is now much more resilient to changes that happen during
+      a bisync run, and far less prone to critical errors / undetected
+      changes (nielash)
+    - Bisync is now capable of rolling a file listing back in cases of
+      uncertainty, essentially marking the file as needing to be
+      rechecked next time. (nielash)
+    - A few basic terminal colors are now supported, controllable with
+      --color (AUTO|NEVER|ALWAYS) (nielash)
+    - Initial listing snapshots of Path1 and Path2 are now generated
+      concurrently, using the same "march" infrastructure as check and
+      sync, for performance improvements and less risk of error.
+      (nielash)
+    - --resync is now much more efficient (especially for users of
+      --create-empty-src-dirs) (nielash)
+    - Google Docs (and other files of unknown size) are now supported
+      (with the same options as in sync) (nielash)
+    - Equality checks before a sync conflict rename now fall back to
+      cryptcheck (when possible) or --download, instead of
+      --size-only, when check is not available. (nielash)
+    - Bisync now fully supports comparing based on any combination of
+      size, modtime, and checksum, lifting the prior restriction on
+      backends without modtime support. (nielash)
+    - Bisync now supports a "Graceful Shutdown" mode to cleanly cancel
+      a run early without requiring --resync. (nielash)
+    - New --recover flag allows robust recovery in the event of
+      interruptions, without requiring --resync. (nielash)
+    - A new --max-lock setting allows lock files to automatically
+      renew and expire, for better automatic recovery when a run is
+      interrupted. (nielash)
+    - Bisync now supports auto-resolving sync conflicts and
+      customizing rename behavior with new --conflict-resolve,
+      --conflict-loser, and --conflict-suffix flags. (nielash)
+    - A new --resync-mode flag allows more control over which version
+      of a file gets kept during a --resync. (nielash)
+    - Bisync now supports --retries and --retries-sleep (when
+      --resilient is set.)
(nielash) + - Clarify file operation directions in dry-run logs (Kyle + Reynolds) +- Local + - Fix cleanRootPath on Windows after go1.21.4 stdlib update + (nielash) + - Implement setting modification time on directories (nielash) + - Implement modtime and metadata for directories (Nick Craig-Wood) + - Fix setting of btime on directories on Windows (Nick Craig-Wood) + - Delete backend implementation of Purge to speed up and make + stats (Nick Craig-Wood) + - Support metadata setting and mapping on server side Move (Nick + Craig-Wood) +- Cache + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) +- Crypt + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) + - Improve handling of undecryptable file names (nielash) + - Add missing error check spotted by linter (Nick Craig-Wood) +- Azure Blob + - Implement --azureblob-delete-snapshots (Nick Craig-Wood) +- B2 + - Clarify exactly what --b2-download-auth-duration does in the + docs (Nick Craig-Wood) +- Chunker + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) +- Combine + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) + - Fix directory metadata error on upstream root (nielash) + - Fix directory move across upstreams (nielash) +- Compress + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) +- Drive + - Implement setting modification time on directories (nielash) + - Implement modtime and metadata setting for directories (Nick + Craig-Wood) + - Support metadata setting and mapping on server side Move,Copy + (Nick Craig-Wood) +- FTP + - Fix mkdir with rsftp which is returning the wrong code (Nick + Craig-Wood) +- Hasher + - Implement setting modification time on directories (if supported + by wrapped remote) (nielash) + - Implement setting metadata on directories (Nick Craig-Wood) + - Fix error from trying to stop an already-stopped db (nielash) + - Look for cached hash if passed hash unexpectedly blank (nielash) +- Imagekit + - Updated docs and web content (Harshit Budhraja) + - Updated overview - supported operations (Harshit Budhraja) +- Mega + - Fix panic with go1.22 (Nick Craig-Wood) +- Netstorage + - Fix Root to return correct directory when pointing to a file + (Nick Craig-Wood) +- Onedrive + - Add metadata support (nielash) +- Opendrive + - Fix moving file/folder within the same parent dir (nielash) +- Oracle Object Storage + - Support backend restore command (Nikhil Ahuja) + - Support workload identity authentication for OKE (Anders + Swanson) +- Protondrive + - Fix encoding of Root method (Nick Craig-Wood) +- Quatrix + - Fix Content-Range header (Volodymyr) + - Add option to skip project folders (Oksana Zhykina) + - Fix Root to return correct directory when pointing to a file + (Nick Craig-Wood) +- S3 + - Add --s3-version-deleted to show delete markers in listings when + using versions. 
(Nick Craig-Wood)
+    - Add IPv6 support with option --s3-use-dual-stack (Anthony
+      Metzidis)
+    - Copy parts in parallel when doing chunked server side copy (Nick
+      Craig-Wood)
+    - GCS provider: fix server side copy of files bigger than 5G (Nick
+      Craig-Wood)
+    - Support metadata setting and mapping on server side Copy (Nick
+      Craig-Wood)
+- Seafile
+    - Fix download/upload error when FILE_SERVER_ROOT is relative
+      (DanielEgbers)
+    - Fix Root to return correct directory when pointing to a file
+      (Nick Craig-Wood)
+- SFTP
+    - Implement setting modification time on directories (nielash)
+    - Set directory modtimes update on write flag (Nick Craig-Wood)
+    - Shorten wait delay for external ssh binaries now that we are
+      using go1.20 (Nick Craig-Wood)
+- Swift
+    - Avoid unnecessary container versioning check (Joe Cai)
+- Union
+    - Implement setting modification time on directories (if supported
+      by wrapped remote) (nielash)
+    - Implement setting metadata on directories (Nick Craig-Wood)
+- WebDAV
+    - Reduce priority of chunks upload log (Gabriel Ramos)
+    - owncloud: Add config owncloud_exclude_shares which allows you to
+      exclude shared files and folders when listing remote resources
+      (Thomas Müller)
+
+v1.65.2 - 2024-01-24
+
+See commits
+
+- Bug Fixes
+    - build: bump github.com/cloudflare/circl from 1.3.6 to 1.3.7
+      (dependabot)
+    - docs updates (Nick Craig-Wood, kapitainsky, nielash, Tera,
+      Harshit Budhraja)
+- VFS
+    - Fix stale data when using --vfs-cache-mode full (Nick
+      Craig-Wood)
+- Azure Blob
+    - IMPORTANT Fix data corruption bug - see #7590 (Nick Craig-Wood)
+
+v1.65.1 - 2024-01-08
+
+See commits
+
+- Bug Fixes
+    - build
+        - Bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795
+          (dependabot)
+        - Update to go1.21.5 to fix Windows path problems (Nick
+          Craig-Wood)
+        - Fix docker build on arm/v6 (Nick Craig-Wood)
+    - install.sh: fix harmless error message on install (Nick
+      Craig-Wood)
+    - accounting: fix stats to show server side transfers (Nick
+      Craig-Wood)
+    - doc fixes (albertony, ben-ba, Eli Orzitzer, emyarod, keongalvin,
+      rarspace01)
+    - nfsmount: Compile for all unix oses, add --sudo and fix
+      error/option handling (Nick Craig-Wood)
+    - operations: Fix files moved by rclone move not being counted as
+      transfers (Nick Craig-Wood)
+    - oauthutil: Avoid panic when *token and *ts.token are the same
+      (rkonfj)
+    - serve s3: Fix listing oddities (Nick Craig-Wood)
+- VFS
+    - Note that --vfs-refresh runs in the background (Nick Craig-Wood)
+- Azurefiles
+    - Fix storage base url (Oksana)
+- Crypt
+    - Fix rclone move a file over itself deleting the file (Nick
+      Craig-Wood)
+- Chunker
+    - Fix rclone move a file over itself deleting the file (Nick
+      Craig-Wood)
+- Compress
+    - Fix rclone move a file over itself deleting the file (Nick
+      Craig-Wood)
+- Dropbox
+    - Fix used space on dropbox team accounts (Nick Craig-Wood)
+- FTP
+    - Fix multi-thread copy (WeidiDeng)
+- Googlephotos
+    - Fix nil pointer exception when batch failed (Nick Craig-Wood)
+- Hasher
+    - Fix rclone move a file over itself deleting the file (Nick
+      Craig-Wood)
+    - Fix invalid memory address error when MaxAge == 0 (nielash)
+- Onedrive
+    - Fix error listing: unknown object type (Nick Craig-Wood)
+    - Fix "unauthenticated: Unauthenticated" errors when uploading
+      (Nick Craig-Wood)
+- Oracleobjectstorage
+    - Fix object storage endpoint for custom endpoints (Manoj Ghosh)
+    - Multipart copy creates the bucket if it doesn't exist.
(Manoj Ghosh) +- Protondrive + - Fix CVE-2023-45286 / GHSA-xwh9-gc39-5298 (Nick Craig-Wood) +- S3 + - Fix crash if no UploadId in multipart upload (Nick Craig-Wood) +- Smb + - Fix shares not listed by updating go-smb2 (halms) +- Union + - Fix rclone move a file over itself deleting the file (Nick + Craig-Wood) + v1.65.0 - 2023-11-26 See commits @@ -53501,10 +56954,12 @@ Bugs and Limitations Limitations -Directory timestamps aren't preserved +Directory timestamps aren't preserved on some backends -Rclone doesn't currently preserve the timestamps of directories. This is -because rclone only really considers objects when syncing. +As of v1.66, rclone supports syncing directory modtimes, if the backend +supports it. Some backends do not support it -- see overview for a +complete list. Additionally, note that empty directories are not synced +by default (this can be enabled with --create-empty-src-dirs.) Rclone struggles with millions of files in a directory/bucket @@ -53850,7 +57305,7 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Scott McGillivray scott.mcgillivray@gmail.com - Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com - Lukas Loesche lukas@mesosphere.io -- emyarod allllaboutyou@gmail.com +- emyarod emyarod@users.noreply.github.com - T.C. Ferguson tcf909@gmail.com - Brandur brandur@mutelight.org - Dario Giovannetti dev@dariogiovannetti.net @@ -54607,6 +58062,27 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Alen Šiljak dev@alensiljak.eu.org - 你知道未来吗 rkonfj@gmail.com - Abhinav Dhiman 8640877+ahnv@users.noreply.github.com +- halms 7513146+halms@users.noreply.github.com +- ben-ba benjamin.brauner@gmx.de +- Eli Orzitzer e_orz@yahoo.com +- Anthony Metzidis anthony.metzidis@gmail.com +- emyarod afw5059@gmail.com +- keongalvin keongalvin@gmail.com +- rarspace01 rarspace01@users.noreply.github.com +- Paul Stern paulstern45@gmail.com +- Nikhil Ahuja nikhilahuja@live.com +- Harshit Budhraja 52413945+harshit-budhraja@users.noreply.github.com +- Tera 24725862+teraa@users.noreply.github.com +- Kyle Reynolds kylereynoldsdev@gmail.com +- Michael Eischer michael.eischer@gmx.de +- Thomas Müller 1005065+DeepDiver1975@users.noreply.github.com +- DanielEgbers 27849724+DanielEgbers@users.noreply.github.com +- Jack Provance 49460795+njprov@users.noreply.github.com +- Gabriel Ramos 109390599+gabrielramos02@users.noreply.github.com +- Dan McArdle d@nmcardle.com +- Joe Cai joe.cai@bigcommerce.com +- Anders Swanson anders.swanson@oracle.com +- huajin tong 137764712+thirdkeyword@users.noreply.github.com Contact the rclone project diff --git a/bin/use-deadlock-detector b/bin/use-deadlock-detector new file mode 100755 index 000000000..ca56026b8 --- /dev/null +++ b/bin/use-deadlock-detector @@ -0,0 +1,14 @@ +#!/bin/bash + +if [[ ! -z $(git status --short --untracked-files=no) ]]; then + echo "Detected uncommitted changes - commit before running this" + exit 1 +fi + +echo "Installing deadlock detector - use 'git reset --hard HEAD' to undo" + +go get -v github.com/sasha-s/go-deadlock/... +find . -type f -name "*.go" -print0 | xargs -0 sed -i~ 's/sync.RWMutex/deadlock.RWMutex/; s/sync.Mutex/deadlock.Mutex/;' +goimports -w . 
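+
+# All sync.Mutex and sync.RWMutex references in the tree now point at
+# the deadlock-detecting equivalents from go-deadlock, and goimports
+# above has fixed up the import blocks to match.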
+ +echo "Done" diff --git a/docs/content/alias.md b/docs/content/alias.md index a0960c528..cc419bf30 100644 --- a/docs/content/alias.md +++ b/docs/content/alias.md @@ -112,4 +112,19 @@ Properties: - Type: string - Required: true +### Advanced options + +Here are the Advanced options specific to alias (Alias for an existing remote). + +#### --alias-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_ALIAS_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index 0e4bfd7f2..883dfeb15 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -831,6 +831,35 @@ Properties: - Type: bool - Default: false +#### --azureblob-delete-snapshots + +Set to specify how to deal with snapshots on blob deletion. + +Properties: + +- Config: delete_snapshots +- Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS +- Type: string +- Required: false +- Choices: + - "" + - By default, the delete operation fails if a blob has snapshots + - "include" + - Specify 'include' to remove the root blob and all its snapshots + - "only" + - Specify 'only' to remove only the snapshots but keep the root blob. + +#### --azureblob-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREBLOB_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ### Custom upload headers diff --git a/docs/content/azurefiles.md b/docs/content/azurefiles.md index a6e84f2e0..4eca4d68f 100644 --- a/docs/content/azurefiles.md +++ b/docs/content/azurefiles.md @@ -687,6 +687,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot +#### --azurefiles-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREFILES_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ### Custom upload headers diff --git a/docs/content/b2.md b/docs/content/b2.md index 7f822b42f..5d62bc892 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -554,9 +554,12 @@ Properties: #### --b2-download-auth-duration -Time before the authorization token will expire in s or suffix ms|s|m|h|d. +Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. + +This is used in combination with "rclone link" for making files +accessible to the public and sets the duration before the download +authorization token will expire. -The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Properties: @@ -632,6 +635,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_B2_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the b2 backend. 
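
A hedged sketch of exercising the --azureblob-delete-snapshots and
--b2-download-auth-duration docs above from the command line (the
container, bucket and file names are hypothetical):

    # Delete an Azure blob together with its snapshots
    rclone deletefile --azureblob-delete-snapshots include azblob:container/file.txt

    # Create a public B2 link whose download token lasts one day
    rclone link --b2-download-auth-duration 1d b2:bucket/file.txt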
diff --git a/docs/content/box.md b/docs/content/box.md index 9e35c5c4f..b4f9b6ed7 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -473,6 +473,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot +#### --box-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_BOX_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/cache.md b/docs/content/cache.md index 9b787366e..f245634c6 100644 --- a/docs/content/cache.md +++ b/docs/content/cache.md @@ -664,6 +664,17 @@ Properties: - Type: Duration - Default: 1s +#### --cache-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CACHE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the cache backend. diff --git a/docs/content/changelog.md b/docs/content/changelog.md index 9404400b6..3a1eeb23d 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -5,6 +5,176 @@ description: "Rclone Changelog" # Changelog +## v1.66.0 - 2024-03-10 + +[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0) + +* Major features + * Rclone will now sync directory modification times if the backend supports it. + * This can be disabled with [--no-update-dir-modtime](/docs/#no-update-dir-modtime) + * See [the overview](/overview/#features) and look for the `D` flags in the `ModTime` column to see which backends support it. + * Rclone will now sync directory metadata if the backend supports it when `-M`/`--metadata` is in use. + * See [the overview](/overview/#features) and look for the `D` flags in the `Metadata` column to see which backends support it. 
+    * Bisync has received many updates; see below for more details or [bisync's changelog](/bisync/#changelog)
+* Removed backends
+    * amazonclouddrive: Remove Amazon Drive backend code and docs (Nick Craig-Wood)
+* New Features
+    * backend
+        * Add description field for all backends (Paul Stern)
+    * build
+        * Update to go1.22 and make go1.20 the minimum required version (Nick Craig-Wood)
+        * Fix `CVE-2024-24786` by upgrading `google.golang.org/protobuf` (Nick Craig-Wood)
+    * check: Respect `--no-unicode-normalization` and `--ignore-case-sync` for `--checkfile` (nielash)
+    * cmd: Much improved shell auto completion which reduces the size of the completion file and works faster (Nick Craig-Wood)
+    * doc updates (albertony, ben-ba, Eli, emyarod, huajin tong, Jack Provance, kapitainsky, keongalvin, Nick Craig-Wood, nielash, rarspace01, rzitzer, Tera, Vincent Murphy)
+    * fs: Add more detailed logging for file includes/excludes (Kyle Reynolds)
+    * lsf
+        * Add `--time-format` flag (nielash)
+        * Make metadata appear for directories (Nick Craig-Wood)
+    * lsjson: Make metadata appear for directories (Nick Craig-Wood)
+    * rc
+        * Add `srcFs` and `dstFs` to `core/stats` and `core/transferred` stats (Nick Craig-Wood)
+        * Add `operations/hashsum` to the rc as `rclone hashsum` equivalent (Nick Craig-Wood)
+        * Add `config/paths` to the rc as `rclone config paths` equivalent (Nick Craig-Wood)
+    * sync
+        * Optionally report list of synced paths to file (nielash)
+        * Implement directory sync for mod times and metadata (Nick Craig-Wood)
+        * Don't set directory modtimes if already set (nielash)
+        * Don't sync directory modtimes from backends which don't have directories (Nick Craig-Wood)
+* Bug Fixes
+    * backend
+        * Make backends which use oauth implement the `Shutdown` and shutdown the oauth properly (rkonfj)
+    * bisync
+        * Handle unicode and case normalization consistently (nielash)
+        * Partial uploads known issue on `local`/`ftp`/`sftp` has been resolved (unless using `--inplace`) (nielash)
+        * Fixed handling of unicode normalization and case insensitivity, support for [`--fix-case`](/docs/#fix-case), [`--ignore-case-sync`](/docs/#ignore-case-sync), [`--no-unicode-normalization`](/docs/#no-unicode-normalization) (nielash)
+        * Bisync no longer fails to find the correct listing file when configs are overridden with backend-specific flags. (nielash)
+    * nfsmount
+        * Fix exit after external unmount (nielash)
+        * Fix `--volname` being ignored (nielash)
+    * operations
+        * Fix renaming a file on macOS (nielash)
+        * Fix case-insensitive moves in operations.Move (nielash)
+        * Fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests (nielash)
+        * Fix TestMkdirModTime test (Nick Craig-Wood)
+        * Fix TestSetDirModTime for backends with SetDirModTime but not Metadata (Nick Craig-Wood)
+        * Fix typo in log messages (nielash)
+    * serve nfs: Fix writing files via Finder on macOS (nielash)
+    * serve restic: Fix error handling (Michael Eischer)
+    * serve webdav: Fix `--baseurl` without leading / (Nick Craig-Wood)
+    * stats: Fix race between ResetCounters and stopAverageLoop called from time.AfterFunc (Nick Craig-Wood)
+    * sync
+        * `--fix-case` flag to rename case insensitive dest (nielash)
+        * Use operations.DirMove instead of sync.MoveDir for `--fix-case` (nielash)
+    * systemd: Fix detection and switch to the coreos package everywhere rather than having 2 separate libraries (Anagh Kumar Baranwal)
+* Mount
+    * Fix macOS not noticing errors with `--daemon` (Nick Craig-Wood)
+    * Notice daemon dying much quicker (Nick Craig-Wood)
+* VFS
+    * Fix unicode normalization on macOS (nielash)
+* Bisync
+    * Copies and deletes are now handled in one operation instead of two (nielash)
+    * `--track-renames` and `--backup-dir` are now supported (nielash)
+    * Final listings are now generated from sync results, to avoid needing to re-list (nielash)
+    * Bisync is now much more resilient to changes that happen during a bisync run, and far less prone to critical errors / undetected changes (nielash)
+    * Bisync is now capable of rolling a file listing back in cases of uncertainty, essentially marking the file as needing to be rechecked next time. (nielash)
+    * A few basic terminal colors are now supported, controllable with [`--color`](/docs/#color-when) (`AUTO`|`NEVER`|`ALWAYS`) (nielash)
+    * Initial listing snapshots of Path1 and Path2 are now generated concurrently, using the same "march" infrastructure as `check` and `sync`, for performance improvements and less risk of error. (nielash)
+    * `--resync` is now much more efficient (especially for users of `--create-empty-src-dirs`) (nielash)
+    * Google Docs (and other files of unknown size) are now supported (with the same options as in `sync`) (nielash)
+    * Equality checks before a sync conflict rename now fall back to `cryptcheck` (when possible) or `--download`, instead of `--size-only`, when `check` is not available. (nielash)
+    * Bisync now fully supports comparing based on any combination of size, modtime, and checksum, lifting the prior restriction on backends without modtime support. (nielash)
+    * Bisync now supports a "Graceful Shutdown" mode to cleanly cancel a run early without requiring `--resync`. (nielash)
+    * New `--recover` flag allows robust recovery in the event of interruptions, without requiring `--resync`. (nielash)
+    * A new `--max-lock` setting allows lock files to automatically renew and expire, for better automatic recovery when a run is interrupted. (nielash)
+    * Bisync now supports auto-resolving sync conflicts and customizing rename behavior with new [`--conflict-resolve`](#conflict-resolve), [`--conflict-loser`](#conflict-loser), and [`--conflict-suffix`](#conflict-suffix) flags. (nielash)
+    * A new [`--resync-mode`](#resync-mode) flag allows more control over which version of a file gets kept during a `--resync`.
(nielash) + * Bisync now supports [`--retries`](/docs/#retries-int) and [`--retries-sleep`](/docs/#retries-sleep-time) (when [`--resilient`](#resilient) is set.) (nielash) + * Clarify file operation directions in dry-run logs (Kyle Reynolds) +* Local + * Fix cleanRootPath on Windows after go1.21.4 stdlib update (nielash) + * Implement setting modification time on directories (nielash) + * Implement modtime and metadata for directories (Nick Craig-Wood) + * Fix setting of btime on directories on Windows (Nick Craig-Wood) + * Delete backend implementation of Purge to speed up and make stats (Nick Craig-Wood) + * Support metadata setting and mapping on server side Move (Nick Craig-Wood) +* Cache + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Crypt + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Improve handling of undecryptable file names (nielash) + * Add missing error check spotted by linter (Nick Craig-Wood) +* Azure Blob + * Implement `--azureblob-delete-snapshots` (Nick Craig-Wood) +* B2 + * Clarify exactly what `--b2-download-auth-duration` does in the docs (Nick Craig-Wood) +* Chunker + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Combine + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Fix directory metadata error on upstream root (nielash) + * Fix directory move across upstreams (nielash) +* Compress + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) +* Drive + * Implement setting modification time on directories (nielash) + * Implement modtime and metadata setting for directories (Nick Craig-Wood) + * Support metadata setting and mapping on server side Move,Copy (Nick Craig-Wood) +* FTP + * Fix mkdir with rsftp which is returning the wrong code (Nick Craig-Wood) +* Hasher + * Implement setting modification time on directories (if supported by wrapped remote) (nielash) + * Implement setting metadata on directories (Nick Craig-Wood) + * Fix error from trying to stop an already-stopped db (nielash) + * Look for cached hash if passed hash unexpectedly blank (nielash) +* Imagekit + * Updated docs and web content (Harshit Budhraja) + * Updated overview - supported operations (Harshit Budhraja) +* Mega + * Fix panic with go1.22 (Nick Craig-Wood) +* Netstorage + * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood) +* Onedrive + * Add metadata support (nielash) +* Opendrive + * Fix moving file/folder within the same parent dir (nielash) +* Oracle Object Storage + * Support `backend restore` command (Nikhil Ahuja) + * Support workload identity authentication for OKE (Anders Swanson) +* Protondrive + * Fix encoding of Root method (Nick Craig-Wood) +* Quatrix + * Fix `Content-Range` header (Volodymyr) + * Add option to skip project folders (Oksana Zhykina) + * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood) +* S3 + * Add `--s3-version-deleted` to show delete markers in listings when using versions. 
(Nick Craig-Wood)
+    * Add IPv6 support with option `--s3-use-dual-stack` (Anthony Metzidis)
+    * Copy parts in parallel when doing chunked server side copy (Nick Craig-Wood)
+    * GCS provider: fix server side copy of files bigger than 5G (Nick Craig-Wood)
+    * Support metadata setting and mapping on server side Copy (Nick Craig-Wood)
+* Seafile
+    * Fix download/upload error when `FILE_SERVER_ROOT` is relative (DanielEgbers)
+    * Fix Root to return correct directory when pointing to a file (Nick Craig-Wood)
+* SFTP
+    * Implement setting modification time on directories (nielash)
+    * Set directory modtimes update on write flag (Nick Craig-Wood)
+    * Shorten wait delay for external ssh binaries now that we are using go1.20 (Nick Craig-Wood)
+* Swift
+    * Avoid unnecessary container versioning check (Joe Cai)
+* Union
+    * Implement setting modification time on directories (if supported by wrapped remote) (nielash)
+    * Implement setting metadata on directories (Nick Craig-Wood)
+* WebDAV
+    * Reduce priority of chunks upload log (Gabriel Ramos)
+    * owncloud: Add config `owncloud_exclude_shares` which allows you to exclude shared files and folders when listing remote resources (Thomas Müller)
+
 ## v1.65.2 - 2024-01-24
 
 [See commits](https://github.com/rclone/rclone/compare/v1.65.1...v1.65.2)
@@ -3435,7 +3605,7 @@ all the docs and Edward Barker for helping re-write the front page.
     * this is for building web native GUIs for rclone
   * Optionally serving objects on the rc http server
   * Ensure rclone fails to start up if the `--rc` port is in use already
-  * See [the rc docs](https://rclone.org/rc/) for more info
+  * See [the rc docs](/rc/) for more info
 * sync/copy/move
   * Make `--files-from` only read the objects specified and don't scan directories (Nick Craig-Wood)
     * This is a huge speed improvement for destinations with lots of files
diff --git a/docs/content/chunker.md b/docs/content/chunker.md
index e7011dc08..722b31078 100644
--- a/docs/content/chunker.md
+++ b/docs/content/chunker.md
@@ -477,4 +477,15 @@ Properties:
       - If meta format is set to "none", rename transactions will always be used.
       - This method is EXPERIMENTAL, don't use on production systems.
 
+#### --chunker-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CHUNKER_DESCRIPTION
+- Type: string
+- Required: false
+
 {{< rem autogenerated options stop >}}
diff --git a/docs/content/combine.md b/docs/content/combine.md
index 5fa17eb44..9c4aafd5c 100644
--- a/docs/content/combine.md
+++ b/docs/content/combine.md
@@ -154,6 +154,21 @@ Properties:
 - Type: SpaceSepList
 - Default: 
 
+### Advanced options
+
+Here are the Advanced options specific to combine (Combine several remotes into one).
+
+#### --combine-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_COMBINE_DESCRIPTION
+- Type: string
+- Required: false
+
 ### Metadata
 
 Any metadata supported by the underlying remote is read and written.
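
A minimal config sketch matching the combine hunk above (the remote
names and the description text are hypothetical):

    [allstorage]
    type = combine
    upstreams = docs=drive:documents media=s3:media-bucket
    description = Both clouds presented as one tree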
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index e1489389d..cc664c426 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -27,14 +27,7 @@ rclone [flags] ### Options ``` - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --ask-password Allow prompt for password for encrypted configuration (default true) --auto-confirm If enabled, do not request console confirmation @@ -47,6 +40,8 @@ rclone [flags] --azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal's client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don't store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -77,6 +72,7 @@ rclone [flags] --azurefiles-client-secret string One of the service principal's client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -96,8 +92,9 @@ rclone [flags] --b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -118,6 +115,7 @@ rclone [flags] --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the 
backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -138,6 +136,7 @@ rclone [flags] --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone") --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server @@ -155,14 +154,17 @@ rclone [flags] --checkers int Number of checkers to run in parallel (default 8) -c, --checksum Check for changes with size & checksum (if available, or fallback to size only). --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO) + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compare-dest stringArray Include additional comma separated server-side paths during comparison + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) @@ -172,6 +174,7 @@ rclone [flags] --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination -L, --copy-links Follow symlinks and copy the pointed to item --cpuprofile string Write cpu profile to file + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32") --crypt-filename-encryption string How to encrypt the filenames (default "standard") @@ -182,6 +185,7 @@ rclone [flags] --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted --crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin") --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD) --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) @@ -200,6 +204,7 @@ rclone 
[flags] --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -248,6 +253,7 @@ rclone [flags] --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -268,10 +274,12 @@ rclone [flags] --fast-list Use recursive list if available; uses more memory but fewer transactions --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -283,11 +291,13 @@ rclone [flags] --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) -f, --filter stringArray Add a file filtering rule --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --fix-case Force rename of case insensitive dest to match source --fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s) --fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s) --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -313,6 +323,7 @@ rclone [flags] --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an 
empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -333,6 +344,7 @@ rclone [flags] --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -341,10 +353,12 @@ rclone [flags] --gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -357,6 +371,7 @@ rclone [flags] --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") @@ -367,6 +382,7 @@ rclone [flags] --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / @@ -378,7 +394,8 @@ rclone [flags] --ignore-errors Delete even if there are I/O errors --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you 
have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -392,6 +409,7 @@ rclone [flags] --inplace Download directly to destination file instead of atomic download to temp/rename -i, --interactive Enable interactive mode --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") @@ -401,6 +419,7 @@ rclone [flags] --jottacloud-auth-url string Auth server URL --jottacloud-client-id string OAuth Client Id --jottacloud-client-secret string OAuth Client Secret + --jottacloud-description string Description of the remote --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -409,6 +428,7 @@ rclone [flags] --jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi) + --koofr-description string Description of the remote --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use --koofr-mountid string Mount ID of the mount to use @@ -417,10 +437,12 @@ rclone [flags] --koofr-setmtime Does the backend support setting modification time (default true) --koofr-user string Your user name --kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s) + --linkbox-description string Description of the remote --linkbox-token string Token from https://www.linkbox.to/admin/account -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive + --local-description string Description of the remote --local-encoding Encoding The encoding for the backend (default Slash,Dot) --local-no-check-updated Don't check to see if the files change during upload --local-no-preallocate Disable preallocation of disk space for transferred files @@ -438,6 +460,7 @@ rclone [flags] --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-client-id string OAuth Client Id --mailru-client-secret string OAuth Client Secret + --mailru-description string Description of the remote --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) @@ -457,11 +480,13 @@ rclone [flags] --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) 
--mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --memprofile string Write memory profile to file -M, --metadata If set, preserve metadata when copying objects --metadata-exclude stringArray Exclude metadatas matching pattern @@ -480,6 +505,7 @@ rclone [flags] --multi-thread-streams int Number of streams to use for multi-thread downloads (default 4) --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) @@ -489,6 +515,7 @@ rclone [flags] --no-gzip-encoding Don't set Accept-Encoding: gzip --no-traverse Don't traverse destination file system on copy --no-unicode-normalization Don't normalize unicode characters in filenames + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only) --onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access) @@ -498,6 +525,7 @@ rclone [flags] --onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -507,6 +535,7 @@ rclone [flags] --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous") --onedrive-link-type string Set the type of the links created by the link command (default "view") --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default "global") --onedrive-root-folder-id string ID of the root folder @@ -520,6 +549,7 @@ rclone [flags] --oos-config-profile string Profile name inside the oci config file (default "Default") --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default 
Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -538,6 +568,7 @@ rclone [flags] --oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username @@ -547,6 +578,7 @@ rclone [flags] --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-password string Your pcloud password (obscured) @@ -557,6 +589,7 @@ rclone [flags] --pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -569,6 +602,7 @@ rclone [flags] --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url @@ -576,6 +610,7 @@ rclone [flags] --progress-terminal-title Show progress on the terminal title (requires -P/--progress) --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -586,12 +621,14 @@ rclone [flags] --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret + --putio-description string Description of the remote --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-token string OAuth Access Token as a JSON blob --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) + --qingstor-description string Description 
of the remote --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API --qingstor-env-auth Get QingStor credentials from runtime @@ -600,12 +637,14 @@ rclone [flags] --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to --quatrix-api-key string API key for accessing Quatrix account + --quatrix-description string Description of the remote --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-host string Host name of Quatrix account --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) + --quatrix-skip-project-folders Skip project folders in operations -q, --quiet Print as little stuff as possible --rc Enable the remote control server --rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572]) @@ -644,6 +683,7 @@ rclone [flags] --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --s3-decompress If set this will decompress gzip encoded objects + --s3-description string Description of the remote --s3-directory-markers Upload an empty object with a trailing slash when a new directory is created --s3-disable-checksum Don't store MD5 checksum with object metadata --s3-disable-http2 Disable usage of http2 for S3 backends @@ -678,19 +718,22 @@ rclone [flags] --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist + --seafile-description string Description of the remote --seafile-encoding
Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -703,6 +746,7 @@ rclone [flags] --sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -737,6 +781,7 @@ rclone [flags] --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -745,11 +790,13 @@ rclone [flags] --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --size-only Skip based on size only, not modtime or checksum --skip-links Don't warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. 
print$) which users aren't supposed to access (default true) @@ -768,6 +815,7 @@ rclone [flags] --stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes") --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default "existing") --storj-satellite-address string Satellite address (default "us1.storj.io") @@ -779,6 +827,7 @@ rclone [flags] --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -792,6 +841,7 @@ rclone [flags] --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") @@ -820,25 +870,29 @@ rclone [flags] --union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-description string Description of the remote --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams -u, --update Skip files that are newer on the destination --uptobox-access-token string Your access token + --uptobox-description string Description of the remote --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-private Set to make uploaded files private --use-cookies Enable session cookiejar --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0") -v, --verbose count Print lots more stuff (repeat for more) -V, --version Print the version number --webdav-bearer-token string Bearer token instead of user/pass (e.g. 
a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-description string Description of the remote --webdav-encoding string The encoding for the backend --webdav-headers CommaSepList Set HTTP headers for all transactions --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi) + --webdav-owncloud-exclude-shares Exclude ownCloud shares --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to @@ -847,6 +901,7 @@ rclone [flags] --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret + --yandex-description string Description of the remote --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-token string OAuth Access Token as a JSON blob @@ -854,6 +909,7 @@ rclone [flags] --zoho-auth-url string Auth server URL --zoho-client-id string OAuth Client Id --zoho-client-secret string OAuth Client Secret + --zoho-description string Description of the remote --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-region string Zoho region to connect to --zoho-token string OAuth Access Token as a JSON blob @@ -874,7 +930,7 @@ rclone [flags] * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files. * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files. -* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest. +* [rclone copyurl](/commands/rclone_copyurl/) - Copy the contents of the URL supplied to dest:path. * [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of an encrypted remote. * [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names. * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them. @@ -895,6 +951,7 @@ rclone [flags] * [rclone move](/commands/rclone_move/) - Move files from source to dest. * [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest. * [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface. +* [rclone nfsmount](/commands/rclone_nfsmount/) - Mount the remote as a file system on a mountpoint. * [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file. * [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents. * [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone. diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md index 057e31848..bd0578ef6 100644 --- a/docs/content/commands/rclone_bisync.md +++ b/docs/content/commands/rclone_bisync.md @@ -4,6 +4,7 @@ description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync url: /commands/rclone_bisync/ groups: Filter,Copy,Important +status: Beta versionIntroduced: v1.58 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs" --- @@ -23,6 +24,11 @@ On each successive run it will: Changes include `New`, `Newer`, `Older`, and `Deleted` files. - Propagate changes on Path1 to Path2, and vice-versa. +Bisync is **in beta** and is considered an **advanced command**, so use with care. +Make sure you have read and understood the entire [manual](https://rclone.org/bisync) +(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using, +or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/). + See [full bisync description](https://rclone.org/bisync/) for details. @@ -33,20 +39,31 @@ rclone bisync remote1:path1 remote2:path2 [flags] ## Options ``` - --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. - --check-filename string Filename for --check-access (default: RCLONE_TEST) - --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") - --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) - --filters-file string Read filtering patterns from a file - --force Bypass --max-delete safety check and run the sync. Consider using with --verbose - -h, --help help for bisync - --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --remove-empty-dirs Remove ALL empty directories at the final cleanup step. - --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! - -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first. - --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync) + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true") + --compare string Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime') + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none") + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict') + --create-empty-src-dirs Sync creation and deletion of empty directories. 
(Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none") + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) ``` @@ -64,7 +81,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -78,6 +95,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index 315dc4b8a..75eb5af4b 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -65,6 +65,15 @@ recently very efficiently like this: rclone copy --max-age 24h --no-traverse /path/to/src remote: + +Rclone will sync the modification times of files and directories if +the backend supports it. If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. 
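For example, a sketch of copying while also preserving metadata, assuming the backends involved support it (`remote:` here is a placeholder):

    rclone copy --metadata /path/to/src remote:dst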
+ + **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. **Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. @@ -96,7 +105,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -110,6 +119,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index 30a42e53f..204681598 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -68,7 +68,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -82,6 +82,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index caeea9de2..a6ac94a10 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -1,6 +1,6 @@ --- title: "rclone copyurl" -description: "Copy url content to dest." +description: "Copy the contents of the URL supplied to dest:path." slug: rclone_copyurl url: /commands/rclone_copyurl/ groups: Important @@ -9,7 +9,7 @@ versionIntroduced: v1.43 --- # rclone copyurl -Copy url content to dest.
+Copy the contents of the URL supplied to dest:path. ## Synopsis @@ -17,11 +17,14 @@ Copy url content to dest. Download a URL's content and copy it to the destination without saving it in temporary storage. -Setting `--auto-filename` will attempt to automatically determine the filename from the URL -(after any redirections) and used in the destination path. -With `--auto-filename-header` in -addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. -With `--print-filename` in addition, the resulting file name will be printed. +Setting `--auto-filename` will attempt to automatically determine the +filename from the URL (after any redirections) and use it in the +destination path. + +With `--auto-filename-header` in addition, if a specific filename is +set in HTTP headers, it will be used instead of the name from the URL. +With `--print-filename` in addition, the resulting file name will be +printed. Setting `--no-clobber` will prevent overwriting a file on the destination if there is one with the same name. @@ -29,6 +32,17 @@ destination if there is one with the same name. Setting `--stdout` or making the output file name `-` will cause the output to be written to standard output. +## Troubleshooting + +If you can't get `rclone copyurl` to work then here are some things you can try: + +- `--disable-http2` rclone will use HTTP2 if available - try disabling it +- `--bind 0.0.0.0` rclone will use IPv6 if available - try disabling it +- `--bind ::0` to disable IPv4 +- `--user-agent curl` - some sites have whitelists for curl's user-agent - try that +- Make sure the site works with `curl` directly + + ``` rclone copyurl https://example.com dest:path [flags] diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index d943182bb..a9606cfc5 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -15,7 +15,7 @@ List all the remotes in the config file and defined in environment variables. rclone listremotes lists all the available remotes from the config file. -When used with the `--long` flag it lists the types too. +When used with the `--long` flag it lists the types and the descriptions too. ``` rclone listremotes [flags] ``` ``` -h, --help help for listremotes - --long Show the type as well as names + --long Show the type and the description as well as names ``` diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md index c45322182..7973b636a 100644 --- a/docs/content/commands/rclone_lsf.md +++ b/docs/content/commands/rclone_lsf.md @@ -109,6 +109,19 @@ those only (without traversing the whole directory structure): rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files rclone copy --files-from-raw new_files /path/to/local remote:path +The default time format is `'2006-01-02 15:04:05'`. +[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with the `--time-format` flag.
+Examples: + rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' + rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' + rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' + rclone lsf remote:path --format pt --time-format RFC3339 + rclone lsf remote:path --format pt --time-format DateOnly + rclone lsf remote:path --format pt --time-format max +`--time-format max` will automatically truncate '`2006-01-02 15:04:05.000000000`' +to the maximum precision supported by the remote. + Any of the filtering options can be applied to this command. @@ -140,16 +153,17 @@ rclone lsf remote:path [flags] ## Options ``` - --absolute Put a leading / in front of path names - --csv Output in CSV format - -d, --dir-slash Append a slash to directory names (default true) - --dirs-only Only list directories - --files-only Only list files - -F, --format string Output format - see help for details (default "p") - --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") - -h, --help help for lsf - -R, --recursive Recurse into the listing - -s, --separator string Separator for the items in the format (default ";") + --absolute Put a leading / in front of path names + --csv Output in CSV format + -d, --dir-slash Append a slash to directory names (default true) + --dirs-only Only list directories + --files-only Only list files + -F, --format string Output format - see help for details (default "p") + --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") + -h, --help help for lsf + -R, --recursive Recurse into the listing + -s, --separator string Separator for the items in the format (default ";") + -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) ``` diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index c22fb2e7e..b7e3c5f11 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -272,12 +272,21 @@ Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_ FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. -# NFS mount +#### Unicode Normalization + +It is highly recommended to keep the default of `--no-unicode-normalization=false` +for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). + +### NFS mount This method spins up an NFS server using the [serve nfs](/commands/rclone_serve_nfs/) command and mounts it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to send a SIGTERM signal to the rclone process using the `kill` command to stop the mount. +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. +This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems.
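As a hedged example, a macOS NFS mount raising that limit might look like this (the value is purely illustrative):

    rclone mount remote:path/to/files /path/to/local/mount --nfs-cache-handle-limit 2000000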
+ ### macFUSE Notes If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from @@ -305,15 +314,6 @@ As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): This means that viewing files with various tools, notably macOS Finder, will cause rclone to update the modification time of the file. This may make rclone upload a full new copy of the file. - -#### Unicode Normalization - -Rclone includes flags for unicode normalization with macFUSE that should be updated -for FUSE-T. See [this forum post](https://forum.rclone.org/t/some-unicode-forms-break-mount-on-macos-with-fuse-t/36403) -and [FUSE-T issue #16](https://github.com/macos-fuse-t/fuse-t/issues/16). The following -flag should be added to the `rclone mount` command. - - -o modules=iconv,from_code=UTF-8,to_code=UTF-8 #### Read Only mounts @@ -786,6 +786,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, hiding the +duplicates, and logging an error, similar to how this is handled in `rclone +sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system.
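A sketch of what this looks like in practice, assuming the `--vfs-disk-space-total-size` flag this section goes on to describe:

    rclone mount remote:path /path/to/mountpoint --vfs-disk-space-total-size 100G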
@@ -844,6 +866,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -856,7 +879,7 @@ rclone mount remote:path /path/to/mountpoint [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index bc6efddde..65e64b26d 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -39,6 +39,14 @@ whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly. +Rclone will sync the modification times of files and directories if +the backend supports it. If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. + **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. @@ -72,7 +80,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -86,6 +94,7 @@ Flags for anything which can Copy a file. 
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index 79f3e3420..ba285e7e9 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -71,7 +71,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -85,6 +85,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") diff --git a/docs/content/commands/rclone_nfsmount.md b/docs/content/commands/rclone_nfsmount.md new file mode 100644 index 000000000..3948c0b50 --- /dev/null +++ b/docs/content/commands/rclone_nfsmount.md @@ -0,0 +1,929 @@ +--- +title: "rclone nfsmount" +description: "Mount the remote as a file system on a mountpoint." +slug: rclone_nfsmount +url: /commands/rclone_nfsmount/ +groups: Filter +status: Experimental +versionIntroduced: v1.65 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/nfsmount/ and as part of making a release run "make commanddocs" +--- +# rclone nfsmount + +Mount the remote as a file system on a mountpoint. + +## Synopsis + +rclone nfsmount allows Linux, FreeBSD, macOS and Windows to +mount any of Rclone's cloud storage systems as a file system with +FUSE. + +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. + +On Linux and macOS, you can run mount in either foreground or background (aka +daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag +to force background mode. On Windows you can run mount in foreground only; +the flag is ignored. + +In background mode rclone acts as a generic Unix mount program: the main +program starts, spawns a background rclone process to set up and maintain the +mount, waits until success or timeout, and exits with an appropriate code +(killing the child process if it fails).
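For example, a background-mode invocation might look like this (a sketch; as noted above, the flag is ignored on Windows):

    rclone nfsmount remote:path/to/files /path/to/local/mount --daemon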
+ +On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` +is an **empty** **existing** directory: + + rclone nfsmount remote:path/to/files /path/to/local/mount + +On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) +for details. If foreground mount is used interactively from a console window, +rclone will serve the mount and occupy the console, so another window should be +used to work with the mount until rclone is interrupted, e.g. by pressing Ctrl-C. + +The following examples will mount to an automatically assigned drive, +to a specific drive letter `X:`, to the path `C:\path\parent\mount` +(where the parent directory or drive must exist, and the mount must **not** exist, +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and +the last example will mount as network share `\\cloud\remote` and map it to an +automatically assigned drive: + + rclone nfsmount remote:path/to/files * + rclone nfsmount remote:path/to/files X: + rclone nfsmount remote:path/to/files C:\path\parent\mount + rclone nfsmount remote:path/to/files \\cloud\remote + +When the program ends while in foreground mode, either via Ctrl+C or receiving +a SIGINT or SIGTERM signal, the mount should be automatically stopped. + +When running in background mode the user will have to stop the mount manually: + + # Linux + fusermount -u /path/to/local/mount + # OS X + umount /path/to/local/mount + +The umount operation can fail, for example when the mountpoint is busy. +When that happens, it is the user's responsibility to stop the mount manually. + +The size of the mounted file system will be set according to information retrieved +from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/) +command. Remotes with unlimited storage may report the used size only, +in which case an additional 1 PiB of free space is assumed. If the remote does not +[support](https://rclone.org/overview/#optional-features) the about feature +at all, then 1 PiB is set as both the total and the free size. + +## Installing on Windows + +To run rclone nfsmount on Windows, you will need to +download and install [WinFsp](http://www.secfs.net/winfsp/). + +[WinFsp](https://github.com/winfsp/winfsp) is an open-source +Windows File System Proxy which makes it easy to write user space file +systems for Windows. It provides a FUSE emulation layer which rclone +uses in combination with [cgofuse](https://github.com/winfsp/cgofuse). +Both of these packages are by Bill Zissimopoulos, who was very helpful +during the implementation of rclone nfsmount for Windows. + +### Mounting modes on windows + +Unlike other operating systems, Microsoft Windows provides a different filesystem +type for network and fixed drives. It optimises access on the assumption that fixed +disk drives are fast and reliable, while network drives have relatively high latency +and lower reliability. Some settings can also be differentiated between the two types, +for example that Windows Explorer should just display icons and not create preview +thumbnails for image and video files on network drives. + +In most cases, rclone will mount the remote as a normal, fixed disk drive by default. +However, you can also choose to mount it as a remote network drive, often described +as a network share. If you mount an rclone remote using the default, fixed drive mode +and experience unexpected program errors, freezes or other issues, consider mounting +as a network drive instead.
+ +When mounting as a fixed disk drive you can either mount to an unused drive letter, +or to a path representing a **nonexistent** subdirectory of an **existing** parent +directory or drive. Using the special value `*` will tell rclone to +automatically assign the next available drive letter, starting with Z: and moving backward. +Examples: + + rclone nfsmount remote:path/to/files * + rclone nfsmount remote:path/to/files X: + rclone nfsmount remote:path/to/files C:\path\parent\mount + rclone nfsmount remote:path/to/files X: + +Option `--volname` can be used to set a custom volume name for the mounted +file system. The default is to use the remote name and path. + +To mount as a network drive, you can add option `--network-mode` +to your nfsmount command. Mounting to a directory path is not supported in +this mode; it is a limitation Windows imposes on junctions, so the remote must always +be mounted to a drive letter. + + rclone nfsmount remote:path/to/files X: --network-mode + +A volume name specified with `--volname` will be used to create the network share path. +A complete UNC path, such as `\\cloud\remote`, optionally with path +`\\cloud\remote\madeup\path`, will be used as is. Any other +string will be used as the share part, after a default prefix `\\server\`. +If no volume name is specified then `\\server\share` will be used. +You must make sure the volume name is unique when you are mounting more than one drive, +or else the mount command will fail. The share name will be treated as the volume label for +the mapped drive, shown in Windows Explorer etc, while the complete +`\\server\share` will be reported as the remote UNC path by +`net use` etc, just like a normal network drive mapping. + +If you specify a full network share UNC path with `--volname`, this will implicitly +set the `--network-mode` option, so the following two examples have the same result: + + rclone nfsmount remote:path/to/files X: --network-mode + rclone nfsmount remote:path/to/files X: --volname \\server\share + +You may also specify the network share UNC path as the mountpoint itself. Then rclone +will automatically assign a drive letter, same as with `*` and use that as +mountpoint, and instead use the UNC path specified as the volume name, as if it were +specified with the `--volname` option. This will also implicitly set +the `--network-mode` option. This means the following two examples have the same result: + + rclone nfsmount remote:path/to/files \\cloud\remote + rclone nfsmount remote:path/to/files * --volname \\cloud\remote + +There is yet another way to enable network mode, and to set the share path, +and that is to pass the "native" libfuse/WinFsp option directly: +`--fuse-flag --VolumePrefix=\server\share`. Note that the path +must have just a single backslash prefix in this case. + + +*Note:* In previous versions of rclone this was the only supported method. + +[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) + +See also the [Limitations](#limitations) section below. + +### Windows filesystem permissions + +The FUSE emulation layer on Windows must convert between the POSIX-based +permission model used in FUSE, and the permission model used in Windows, +based on access-control lists (ACL). + +The mounted filesystem will normally get three entries in its access-control list (ACL), +representing permissions for the POSIX permission scopes: Owner, group and others.
+By default, the owner and group will be taken from the current user, and the built-in +group "Everyone" will be used to represent others. The user/group can be customized +with FUSE options "UserName" and "GroupName", +e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`. +The permissions on each entry will be set according to [options](#options) +`--dir-perms` and `--file-perms`, which take a value in traditional Unix +[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation). + +The default permissions correspond to `--file-perms 0666 --dir-perms 0777`, +i.e. read and write permissions to everyone. This means you will not be able +to start any programs from the mount. To be able to do that you must add +execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it +to everyone. If the program needs to write files, chances are you will +have to enable [VFS File Caching](#vfs-file-caching) as well (see also +[limitations](#limitations)). Note that the default write permission has +some restrictions for accounts other than the owner, specifically it lacks +the "write extended attributes", as explained next. + +The mapping of permissions is not always trivial, and the result you see in +Windows Explorer may not be exactly what you expected. For example, when setting +a value that includes write access for the group or others scope, this will be +mapped to individual permissions "write attributes", "write data" and +"append data", but not "write extended attributes". Windows will then show this +as basic permission "Special" instead of "Write", because "Write" also covers +the "write extended attributes" permission. When setting digit 0 for group or +others, to indicate no permissions, they will still get individual permissions +"read attributes", "read extended attributes" and "read permissions". This is +done for compatibility reasons, e.g. to allow users without additional +permissions to be able to read basic metadata about files like in Unix. + +WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity", +that allows the complete specification of file security descriptors using +[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format). +With this you get detailed control of the resulting permissions, compared +to the use of the POSIX permissions described above, and no additional permissions +will be added automatically for compatibility with Unix. Some example use +cases follow. + +If you set POSIX permissions for only allowing access to the owner, +using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in +"Everyone" group will still be given some special permissions, as described +above. Some programs may then (incorrectly) interpret this as the file being +accessible by everyone, for example an SSH client may warn about "unprotected +private key file". You can work around this by specifying +`-o FileSecurity="D:P(A;;FA;;;OW)"`, which sets file all access (FA) to the +owner (OW), and nothing else. + +When setting write permissions then, except for the owner, this does not +include the "write extended attributes" permission, as mentioned above. +This may prevent applications from writing to files, giving a permission denied +error instead.
To set working write permissions for the built-in "Everyone" +group, similar to what it gets by default but with the addition of the +"write extended attributes", you can specify +`-o FileSecurity="D:P(A;;FRFW;;;WD)"`, which sets file read (FR) and file +write (FW) to everyone (WD). If file execute (FX) is also needed, then change +to `-o FileSecurity="D:P(A;;FRFWFX;;;WD)"`, or set file all access (FA) to +get full access permissions, including delete, with +`-o FileSecurity="D:P(A;;FA;;;WD)"`. + +### Windows caveats + +Drives created as Administrator are not visible to other accounts, +not even an account that was elevated to Administrator with the +User Account Control (UAC) feature. A result of this is that if you mount +to a drive letter from a Command Prompt run as Administrator, and then try +to access the same drive from Windows Explorer (which does not run as +Administrator), you will not be able to see the mounted drive. + +If you don't need to access the drive from applications running with +administrative privileges, the easiest way around this is to always +create the mount from a non-elevated command prompt. + +To make mapped drives available to the user account that created them +regardless of whether elevated or not, there is a special Windows setting called +[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry) +that can be enabled. + +It is also possible to make a drive mount available to everyone on the system, +by running the process creating it as the built-in SYSTEM account. +There are several ways to do this: One is to use the command-line +utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec), +from Microsoft's Sysinternals suite, which has the option `-s` to start +processes as the SYSTEM account. Another alternative is to run the mount +command from a Windows Scheduled Task, or a Windows Service, configured +to run as the SYSTEM account. A third alternative is to use the +[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture). +Read more in the [install documentation](https://rclone.org/install/). +Note that when running rclone as another user, it will not use +the configuration file from your profile unless you tell it to +with the [`--config`](https://rclone.org/docs/#config-config-file) option. +Note also that it is now the SYSTEM account that will have the owner +permissions, and other accounts will have permissions according to the +group or others scopes. As mentioned above, these will then not get the +"write extended attributes" permission, and this may prevent writing to +files. You can work around this with the FileSecurity option; see the +example above. + +Note that mapping to a directory path, instead of a drive letter, +does not suffer from the same limitations. + +## Mounting on macOS + +Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) +(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional +FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system +which "mounts" via an NFSv4 local server. + +#### Unicode Normalization + +It is highly recommended to keep the default of `--no-unicode-normalization=false` +for all `mount` and `serve` commands on macOS.
For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). + +### NFS mount + +This method spins up an NFS server using the [serve nfs](/commands/rclone_serve_nfs/) command and mounts +it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to +send a SIGTERM signal to the rclone process using the `kill` command to stop the mount. + +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. +This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +but consider lowering this limit if the server's system resource usage causes problems. + +### macFUSE Notes + +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from +the website, rclone will locate the macFUSE libraries without any further intervention. +If, however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, +the following additional steps are required: + + sudo mkdir /usr/local/lib + cd /usr/local/lib + sudo ln -s /opt/local/lib/libfuse.2.dylib + +### FUSE-T Limitations, Caveats, and Notes + +There are some limitations, caveats, and notes about how it works. These are current as +of FUSE-T version 1.0.14. + +#### ModTime update on read + +As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): + +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with +> 'touch -m' and 'touch -a' commands + +This means that viewing files with various tools, notably macOS Finder, will cause rclone +to update the modification time of the file. This may make rclone upload a full new copy +of the file. + +#### Read Only mounts + +When mounting with `--read-only`, attempts to write to files will fail *silently* as +opposed to with a clear warning as in macFUSE. + +## Limitations + +Without the use of `--vfs-cache-mode` this can only write files +sequentially; it can only seek when reading. This means that many +applications won't work with their files on an rclone mount without +`--vfs-cache-mode writes` or `--vfs-cache-mode full`. +See the [VFS File Caching](#vfs-file-caching) section for more info. +When using NFS mount on macOS, if you don't specify `--vfs-cache-mode` +the mount point will be read-only. + +The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2) +do not support the concept of empty directories, so empty +directories will have a tendency to disappear once they fall out of +the directory cache. + +When `rclone mount` is invoked on Unix with the `--daemon` flag, the main rclone +program will wait for the background mount to become ready or until the timeout +specified by the `--daemon-wait` flag. On Linux it can check mount status using +ProcFS, so the flag in fact sets the **maximum** time to wait, while the real wait +can be less. On macOS / BSD the time to wait is constant and the check is +performed only at the end. We advise you to set a reasonable wait time on macOS. + +Only supported on Linux, FreeBSD, OS X and Windows at the moment. + +## rclone nfsmount vs rclone sync/copy + +File systems expect things to be 100% reliable, whereas cloud storage +systems are a long way from 100% reliable. The rclone sync/copy +commands cope with this with lots of retries.
However, rclone nfsmount
+can't use retries in the same way without making local copies of the
+uploads. Look at the [VFS File Caching](#vfs-file-caching) section
+for solutions to make nfsmount more reliable.
+
+## Attribute caching
+
+You can use the flag `--attr-timeout` to set the time the kernel caches
+the attributes (size, modification time, etc.) for directory entries.
+
+The default is `1s` which caches files just long enough to avoid
+too many callbacks to rclone from the kernel.
+
+In theory 0s should be the correct value for filesystems which can
+change outside the control of the kernel. However, this causes quite a
+few problems such as
+[rclone using too much memory](https://github.com/rclone/rclone/issues/2157),
+[rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112)
+and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147).
+
+The kernel can cache the info about a file for the time given by
+`--attr-timeout`. You may see corruption if the remote file changes
+length during this window. It will show up as either a truncated file
+or a file with garbage on the end. With `--attr-timeout 1s` this is
+very unlikely but not impossible. The higher you set `--attr-timeout`
+the more likely it is. The default setting of "1s" is the lowest
+setting which mitigates the problems above.
+
+If you set it higher (`10s` or `1m` say) then the kernel will call
+back to rclone less often, making it more efficient; however, there is
+more chance of the corruption issue above.
+
+If files don't change on the remote outside of the control of rclone
+then there is no chance of corruption.
+
+This is the same as setting the `attr_timeout` option in mount.fuse.
+
+## Filters
+
+Note that all the rclone filters can be used to select a subset of the
+files to be visible in the mount.
+
+## systemd
+
+When running rclone nfsmount as a systemd service, it is possible
+to use Type=notify. In this case the service will enter the started state
+after the mountpoint has been successfully set up.
+Units having the rclone nfsmount service specified as a requirement
+will see all files and folders immediately in this mode.
+
+Note that systemd runs mount units without any environment variables including
+`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
+and you should provide `--config` and `--cache-dir` explicitly as absolute
+paths via rclone arguments.
+Since mounting requires the `fusermount` program, rclone will use the fallback
+PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
+is present on this PATH.
+
+## Rclone as Unix mount helper
+
+The core Unix program `/bin/mount` normally takes the `-t FSTYPE` argument
+then runs the `/sbin/mount.FSTYPE` helper program passing it mount options
+as `-o key=val,...` or `--opt=...`. Automount (classic or systemd) behaves
+in a similar way.
+
+rclone by default expects GNU-style flags `--key val`. To run it as a mount
+helper you should symlink the rclone binary to `/sbin/mount.rclone` and optionally
+`/usr/bin/rclonefs`, e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`.
+rclone will detect it and translate command-line arguments appropriately.
+
+Now you can run classic mounts like this:
+```
+mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
+```
+
+or create systemd mount units:
+```
+# /etc/systemd/system/mnt-data.mount
+[Unit]
+Description=Mount for /mnt/data
+[Mount]
+Type=rclone
+What=sftp1:subdir
+Where=/mnt/data
+Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
+```
+
+optionally accompanied by a systemd automount unit
+```
+# /etc/systemd/system/mnt-data.automount
+[Unit]
+Description=AutoMount for /mnt/data
+[Automount]
+Where=/mnt/data
+TimeoutIdleSec=600
+[Install]
+WantedBy=multi-user.target
+```
+
+or add in `/etc/fstab` a line like
+```
+sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
+```
+
+or use classic Automountd.
+Remember to provide explicit `config=...,cache-dir=...` as a workaround for
+mount units being run without `HOME`.
+
+Rclone in the mount helper mode will split `-o` argument(s) by comma, replace `_`
+by `-` and prepend `--` to get the command-line flags. Options containing commas
+or spaces can be wrapped in single or double quotes. Any inner quotes inside outer
+quotes of the same type should be doubled.
+
+Mount option syntax includes a few extra options treated specially:
+
+- `env.NAME=VALUE` will set an environment variable for the mount process.
+  This helps with Automountd and Systemd.mount which don't allow setting
+  custom environment for mount helpers.
+  Typically you will use `env.HTTPS_PROXY=proxy.host:3128` or `env.HOME=/root`
+- `command=cmount` can be used to run `cmount` or any other rclone command
+  rather than the default `mount`.
+- `args2env` will pass mount options to the mount helper running in background
+  via environment variables instead of command line arguments. This allows
+  hiding secrets from such commands as `ps` or `pgrep`.
+- `vv...` will be transformed into the appropriate `--verbose=N`
+- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and the like
+  are intended only for Automountd and ignored by rclone.
+
+## VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk
+filing system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the
+VFS layer has to deal with that. Because there is no one right way of
+doing this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info
+about files and directories (but not the data) in memory.
+
+## VFS Directory Cache
+
+Using the `--dir-cache-time` flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made through the VFS will appear immediately or
+invalidate the cache.
+
+    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
+    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web
+interface or a different copy of rclone will only be picked up once
+the directory cache expires if the backend configured does not support
+polling for changes.
If the backend supports polling, changes will be
+picked up within the polling interval.
+
+You can send a `SIGHUP` signal to rclone for it to flush all
+directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+    kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+    rclone rc vfs/forget
+
+Or individual files or directories:
+
+    rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+## VFS File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory will
+be used.
+
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
+
+## VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+    --cache-dir string                     Directory rclone will use for caching.
+    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
+    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
+    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
+    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
+    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
+    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed and if they haven't been accessed for `--vfs-write-back`
+seconds. If rclone is quit or dies with files that haven't been
+uploaded, these will be uploaded next time rclone is run with the same
+flags.
+
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first, starting with the files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
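+
+As a concrete sketch (the remote name, mountpoint, and sizes below are
+illustrative assumptions, not recommendations), the cache quota flags can be
+combined like this to bound the cache at 10 GiB, keep at least 1 GiB free on
+the cache disk, and check for stale objects every 2 minutes:
+
+    rclone nfsmount remote: /mnt/data \
+        --vfs-cache-mode full \
+        --cache-dir /var/cache/rclone \
+        --vfs-cache-max-size 10G \
+        --vfs-cache-min-free-space 1G \
+        --vfs-cache-poll-interval 2m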
+
+The `--vfs-cache-max-age` flag will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation: s, m, h, d, w.
+
+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
+### --vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, while write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone
+will keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to `--vfs-cache-mode writes`.
+
+When reading a file rclone will read `--buffer-size` plus
+`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
+whereas the `--vfs-read-ahead` is buffered on disk.
+
+When using this mode it is recommended that `--buffer-size` is not set
+too large and `--vfs-read-ahead` is set large if required.
+
+**IMPORTANT** not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache
+directory is on a filesystem which doesn't support sparse files and it
+will log an ERROR message if one is detected.
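+
+For example, a hedged sketch (the mountpoint and sizes here are assumptions
+chosen for illustration) of a full-mode mount tuned for large sequential
+reads, pairing a modest memory buffer with a larger on-disk read-ahead:
+
+    rclone nfsmount remote: /mnt/media \
+        --vfs-cache-mode full \
+        --buffer-size 16M \
+        --vfs-read-ahead 256M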
+
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a VFS cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
+
+## VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the
+chunk specified. This can reduce the used download quota for some
+remotes by requesting only chunks from the remote that are actually
+read, at the cost of an increased number of requests.
+
+These flags control the chunking:
+
+    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
+    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
+and then double the size for each read. When `--vfs-read-chunk-size-limit` is
+specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
+open file will get doubled only until the specified value is reached. If the
+value is "off", which is the default, the limit is disabled and the chunk size
+will grow indefinitely.
+
+With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
+the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
+0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+
+Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
+
+## VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
+feature.
+
+In particular S3 and Swift benefit hugely from the `--no-modtime` flag
+(or use `--use-server-modtime` for a slightly different effect) as each
+read of the modification time takes a transaction.
+
+    --no-checksum     Don't compare checksums on up/download.
+    --no-modtime      Don't read/write the modification time (can speed things up).
+    --no-seek         Don't allow seeking in files.
+    --read-only       Only allow read-only access.
+
+Sometimes rclone is delivered reads or writes out of order. Rather
+than seeking rclone will wait a short time for the in sequence read or
+write to come in. These flags only come into effect when not using an
+on disk cache file.
+
+    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
+    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (`--vfs-cache-mode` with value writes or full),
+the global flag `--transfers` can be set to adjust the number of parallel uploads of
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
+
+    --transfers int  Number of file transfers to run in parallel (default 4)
+
+## VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only
+by case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case used
+to create the file is preserved and available for programs to query.
+It is not allowed for two files in the same directory to differ only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to make macOS
+file systems case-sensitive but that is not the default.
+
+The `--vfs-case-insensitive` VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the remote
+as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers
+to an existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if no file with exactly the same
+name is found but a name differing only by case exists, rclone will
+transparently fix up the name. This fixup happens only when an existing file
+is requested. Case sensitivity of file names created anew by rclone is
+controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the target)
+may differ from case sensitivity of a file system presented by rclone (the source).
+The flag controls whether "fixup" is performed to satisfy the target.
+
+If the flag is not provided on the command line, then its default value depends
+on the operating system where rclone runs: "true" on Windows and macOS, "false"
+otherwise. If the flag is provided without a value, then it is "true".
+
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed.
However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
+
+## Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running `df` on the
+filesystem, then pass the flag `--vfs-used-is-size` to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to `rclone size`
+and compute the total used space itself.
+
+_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots of API
+calls resulting in extra charges. Use it as a last resort and only with caching.
+
+
+```
+rclone nfsmount remote:path /path/to/mountpoint [flags]
+```
+
+## Options
+
+```
+      --addr string                            IPaddress:Port or :Port to bind server to
+      --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
+      --allow-other                            Allow access to other users (not supported on Windows)
+      --allow-root                             Allow access to root user (not supported on Windows)
+      --async-read                             Use asynchronous reads (not supported on Windows) (default true)
+      --attr-timeout Duration                  Time for which file/directory attributes are cached (default 1s)
+      --daemon                                 Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,...
to monitor) (not supported on Windows) + --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s) + --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s) + --debug-fuse Debug the FUSE internals - needs -v + --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows) + --devname string Set the device name - default is remote:path + --dir-cache-time Duration Time to cache directory entries for (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required) + --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) + -h, --help help for nfsmount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki) + --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset) + --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only) + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) + --no-checksum Don't compare checksums on up/download + --no-modtime Don't read/write the modification time (can speed things up) + --no-seek Don't allow seeking in files + --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true) + --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only) + -o, --option stringArray Option for libfuse/WinFsp (repeat if required) + --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) + --read-only Only allow read-only access + --sudo Use sudo to run the mount command as root. 
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) + --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) + --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) + --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-refresh Refreshes the directory cache recursively in the background on start + --vfs-used-is-size rclone size Use the rclone size algorithm for Used size + --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) + --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) + --volname string Set the volume name (supported on Windows and OSX only) + --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +``` + + +## Filter Options + +Flags for filtering directory listings. 
+ +``` + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +``` + +See the [global flags page](/flags/) for global options not listed here. + +# SEE ALSO + +* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md index abe864473..451e4ccb9 100644 --- a/docs/content/commands/rclone_serve_dlna.md +++ b/docs/content/commands/rclone_serve_dlna.md @@ -347,6 +347,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. It is +therefore highly recommended to keep the default of `false` on macOS, to avoid +encoding compatibility issues. + +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes` +flag allows hiding these duplicates. This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. 
However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -392,6 +414,7 @@ rclone serve dlna remote:path [flags]
       --read-only                              Only allow read-only access
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -404,7 +427,7 @@ rclone serve dlna remote:path [flags]
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md
index 0366cf6c9..5fc935eac 100644
--- a/docs/content/commands/rclone_serve_docker.md
+++ b/docs/content/commands/rclone_serve_docker.md
@@ -362,6 +362,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory.
For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -425,6 +447,7 @@ rclone serve docker [flags]
       --socket-gid int                         GID for unix socket (default: current process GID) (default 1000)
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
      --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -437,7 +460,7 @@ rclone serve docker [flags]
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
index 6ba8142e9..8e3136ff5 100644
--- a/docs/content/commands/rclone_serve_ftp.md
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -344,6 +344,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates.
This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -376,7 +398,7 @@ together, if `--auth-proxy` is set the authorized keys
 option will be ignored.
 
 There is an example program
-[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
+[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
 in the rclone source code.
 
 The program's job is to take a `user` and `pass` on the input and turn
@@ -473,6 +495,7 @@ rclone serve ftp remote:path [flags]
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
       --user string                            User name for authentication (default "anonymous")
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -485,7 +508,7 @@ rclone serve ftp remote:path [flags]
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index f599e5284..d060cd120 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -445,6 +445,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -477,7 +499,7 @@ together, if `--auth-proxy` is set the authorized keys
 option will be ignored.
 
 There is an example program
-[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
+[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
 in the rclone source code.
 
The program's job is to take a `user` and `pass` on the input and turn @@ -583,6 +605,7 @@ rclone serve http remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -595,7 +618,7 @@ rclone serve http remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md index bbd28ccf8..c39005f5e 100644 --- a/docs/content/commands/rclone_serve_nfs.md +++ b/docs/content/commands/rclone_serve_nfs.md @@ -4,6 +4,7 @@ description: "Serve the remote as an NFS mount" slug: rclone_serve_nfs url: /commands/rclone_serve_nfs/ groups: Filter +status: Experimental versionIntroduced: v1.65 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/nfs/ and as part of making a release run "make commanddocs" --- @@ -26,7 +27,9 @@ NFS mount over local network, you need to specify the listening address and port Modifying files through NFS protocol requires VFS caching. Usually you will need to specify `--vfs-cache-mode` in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, -the mount will be read-only. +the mount will be read-only. Note also that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the caching handler. +This should not be set too low or you may experience errors when trying to access files. The default is `1000000`, but consider lowering this limit if +the server's system resource usage causes problems. To serve NFS over the network use following command: @@ -353,6 +356,28 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". +The `--no-unicode-normalization` flag controls whether a similar "fixup" is +performed for filenames that differ but are [canonically +equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to +unicode. 
Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -387,6 +412,7 @@ rclone serve nfs remote:path [flags]
       --file-perms FileMode                    File permissions (default 0666)
       --gid uint32                             Override the gid field set by the filesystem (not supported on Windows) (default 1000)
   -h, --help                                   help for nfs
+      --nfs-cache-handle-limit int             max file handles cached simultaneously (min 5) (default 1000000)
       --no-checksum                            Don't compare checksums on up/download
       --no-modtime                             Don't read/write the modification time (can speed things up)
       --no-seek                                Don't allow seeking in files
@@ -394,6 +420,7 @@
       --read-only                              Only allow read-only access
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -406,7 +433,7 @@
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index 986d31119..b4dc707a9 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -487,6 +487,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -541,6 +563,7 @@ rclone serve s3 remote:path [flags]
       --server-write-timeout Duration          Timeout for server writing data (default 1h0m0s)
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -553,7 +576,7 @@ rclone serve s3 remote:path [flags]
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index 9181e7be4..ea0fcd5d2 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -376,6 +376,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates. This comes with a performance tradeoff, as
+rclone will have to scan the entire directory for duplicates when listing a
+directory. For this reason, it is recommended to leave this disabled if not
+needed. However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same filename, an odd
+situation will occur: both versions of the file will be visible in the mount,
+and both will appear to be editable; however, editing either version will
+actually result in only the NFD version getting edited under the hood.
+`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario,
+hiding the duplicates, and logging an error, similar to how this is handled
+in `rclone sync`.
+
 ## VFS Disk Options
 
 This flag allows you to manually set the statistics about the filing system.
@@ -408,7 +430,7 @@ together, if `--auth-proxy` is set the authorized keys
 option will be ignored.
 
 There is an example program
-[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
+[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
 in the rclone source code.
 
 The program's job is to take a `user` and `pass` on the input and turn
@@ -505,6 +527,7 @@ rclone serve sftp remote:path [flags]
       --uid uint32                             Override the uid field set by the filesystem (not supported on Windows) (default 1000)
       --umask int                              Override the permission bits set by the filesystem (not supported on Windows) (default 2)
       --user string                            User name for authentication
+      --vfs-block-norm-dupes                   If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
       --vfs-cache-max-age Duration             Max time since last access of objects in the cache (default 1h0m0s)
       --vfs-cache-max-size SizeSuffix          Max total size of objects in the cache (default off)
       --vfs-cache-min-free-space SizeSuffix    Target minimum free space on the disk containing the cache (default off)
@@ -517,7 +540,7 @@ rclone serve sftp remote:path [flags]
       --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks (default 128Mi)
       --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
       --vfs-read-wait Duration                 Time to wait for in-sequence read before seeking (default 20ms)
-      --vfs-refresh                            Refreshes the directory cache recursively on start
+      --vfs-refresh                            Refreshes the directory cache recursively in the background on start
       --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size
       --vfs-write-back Duration                Time to writeback files after last use when using cache (default 5s)
       --vfs-write-wait Duration                Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index cbd0ff0c0..4994f0b56 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -474,6 +474,28 @@ If the flag is not provided on the command line, then its default value depends
 on the operating system where rclone runs: "true" on Windows and macOS, "false"
 otherwise. If the flag is provided without a value, then it is "true".
 
+The `--no-unicode-normalization` flag controls whether a similar "fixup" is
+performed for filenames that differ but are [canonically
+equivalent](https://en.wikipedia.org/wiki/Unicode_equivalence) with respect to
+unicode. Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms. It is
+therefore highly recommended to keep the default of `false` on macOS, to avoid
+encoding compatibility issues.
+
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the `--vfs-block-norm-dupes`
+flag allows hiding these duplicates.
This comes with a performance tradeoff, as +rclone will have to scan the entire directory for duplicates when listing a +directory. For this reason, it is recommended to leave this disabled if not +needed. However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same filename, an odd +situation will occur: both versions of the file will be visible in the mount, +and both will appear to be editable; however, editing either version will +actually result in only the NFD version getting edited under the hood. +`--vfs-block-norm-dupes` prevents this confusion by detecting this scenario, hiding the +duplicates, and logging an error, similar to how this is handled in `rclone +sync`. + ## VFS Disk Options This flag allows you to manually set the statistics about the filing system. @@ -614,6 +636,7 @@ rclone serve webdav remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -626,7 +649,7 @@ rclone serve webdav remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index 5ecbb3c03..41fd63c89 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -43,15 +43,22 @@ the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory. +Rclone will sync the modification times of files and directories if +the backend supports it. If metadata syncing is required then use the +`--metadata` flag. + +Note that the modification time and metadata for the root directory +will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +for more info. + **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics **Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info. 
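As a quick illustration of the sync notes above, a hypothetical invocation might look like the following (the remote and path names are placeholders):

```
# Modification times are synced by default where the backend supports them;
# --metadata additionally syncs the remaining supported metadata items,
# and -P shows real-time transfer statistics.
rclone sync source:path dest:path --metadata -P
```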
-## Logger Flags +# Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` -and `--error` flags write paths, one per line, to the file name (or +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, one per line, to the file name (or stdout if it is `-`) supplied. What they write is described in the help below. For example `--differ` will write all paths which are present on both the source and destination but different. @@ -86,6 +93,7 @@ Note also that each file is logged during the sync, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file (which may or may not match what actually DID.) + ``` rclone sync source:path dest:path [flags] ``` @@ -93,8 +101,24 @@ rclone sync source:path dest:path [flags] ## Options ``` + --absolute Put a leading / in front of path names + --combined string Make a combined report of changes to this file --create-empty-src-dirs Create empty source dirs on destination after sync + --csv Output in CSV format + --dest-after string Report all files that exist on the dest post-sync + --differ string Report all non-matching files to this file + -d, --dir-slash Append a slash to directory names (default true) + --dirs-only Only list directories + --error string Report all files with errors (hashing or reading) to this file + --files-only Only list files (default true) + -F, --format string Output format - see lsf help for details (default "p") + --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5") -h, --help help for sync + --match string Report all matching files to this file + --missing-on-dst string Report all files missing from the destination to this file + --missing-on-src string Report all files missing from the source to this file + -s, --separator string Separator for the items in the format (default ";") + -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) ``` @@ -112,7 +136,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -126,6 +150,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -145,6 +170,7 @@ Flags just used for `rclone sync`. 
--delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) diff --git a/docs/content/compress.md b/docs/content/compress.md index 671735460..3ede8adc5 100644 --- a/docs/content/compress.md +++ b/docs/content/compress.md @@ -158,6 +158,17 @@ Properties: - Type: SizeSuffix - Default: 20Mi +#### --compress-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMPRESS_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. diff --git a/docs/content/crypt.md b/docs/content/crypt.md index 44dd1d753..53561e6b8 100644 --- a/docs/content/crypt.md +++ b/docs/content/crypt.md @@ -579,6 +579,22 @@ Properties: - Type: bool - Default: false +#### --crypt-strict-names + +If set, this will raise an error when crypt comes across a filename that can't be decrypted. + +(By default, rclone will just log a NOTICE and continue as normal.) +This can happen if encrypted and unencrypted files are stored in the same +directory (which is not recommended.) It may also indicate a more serious +problem that should be investigated. + +Properties: + +- Config: strict_names +- Env Var: RCLONE_CRYPT_STRICT_NAMES +- Type: bool +- Default: false + #### --crypt-filename-encoding How to encode the encrypted filename to text string. @@ -616,6 +632,17 @@ Properties: - Type: string - Default: ".bin" +#### --crypt-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CRYPT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. diff --git a/docs/content/drive.md b/docs/content/drive.md index 854b185eb..2aaa70394 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -1386,10 +1386,23 @@ Properties: - "true" - Get GCP IAM credentials from the environment (env vars or IAM). +#### --drive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DRIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored in the properties field of the drive object. +Metadata is supported on files and directories. + Here are the possible system metadata items for the drive backend. 
| Name | Help | Type | Example | Read Only | diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md index 91b15688e..476f5b264 100644 --- a/docs/content/dropbox.md +++ b/docs/content/dropbox.md @@ -453,6 +453,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --dropbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DROPBOX_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/fichier.md b/docs/content/fichier.md index b5a824505..1ed524273 100644 --- a/docs/content/fichier.md +++ b/docs/content/fichier.md @@ -195,6 +195,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot +#### --fichier-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FICHIER_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md index 1666cd6bc..f21f3dd0a 100644 --- a/docs/content/filefabric.md +++ b/docs/content/filefabric.md @@ -274,4 +274,15 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --filefabric-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FILEFABRIC_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/flags.md b/docs/content/flags.md index 4ce4c079f..d5768783e 100644 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -23,7 +23,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don't skip files that match size and time - transfer all files + -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -37,6 +37,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don't check the destination, copy regardless --no-traverse Don't traverse destination file system on copy + --no-update-dir-modtime Don't update directory modification times --no-update-modtime Don't update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial") @@ -57,6 +58,7 @@ Flags just used for `rclone sync`. 
--delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -112,7 +114,7 @@ General networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0") ``` @@ -296,14 +298,7 @@ Flags to control the Remote Control API. Backend only flags. These can be set in the config file also. ``` - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name @@ -314,6 +309,8 @@ Backend only flags. These can be set in the config file also. --azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal's client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don't store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -344,6 +341,7 @@ Backend only flags. These can be set in the config file also. --azurefiles-client-secret string One of the service principal's client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -363,8 +361,9 @@ Backend only flags. These can be set in the config file also. 
--b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -383,6 +382,7 @@ Backend only flags. These can be set in the config file also. --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -399,6 +399,7 @@ Backend only flags. These can be set in the config file also. --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend") --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) @@ -412,15 +413,19 @@ Backend only flags. These can be set in the config file also. 
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default "md5") --chunker-remote string Remote to chunk/unchunk + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress -L, --copy-links Follow symlinks and copy the pointed to item + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32") --crypt-filename-encryption string How to encrypt the filenames (default "standard") @@ -431,6 +436,7 @@ Backend only flags. These can be set in the config file also. --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted --crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin") --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs @@ -440,6 +446,7 @@ Backend only flags. These can be set in the config file also. --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -488,6 +495,7 @@ Backend only flags. These can be set in the config file also. --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -497,10 +505,12 @@ Backend only flags. These can be set in the config file also. 
--dropbox-token-url string Token server url --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -511,6 +521,7 @@ Backend only flags. These can be set in the config file also. --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -536,6 +547,7 @@ Backend only flags. These can be set in the config file also. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -556,6 +568,7 @@ Backend only flags. These can be set in the config file also. --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -564,10 +577,12 @@ Backend only flags. These can be set in the config file also. --gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. 
myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -576,6 +591,7 @@ Backend only flags. These can be set in the config file also. --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1") @@ -586,10 +602,12 @@ Backend only flags. These can be set in the config file also. --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don't use HEAD requests --http-no-slash Set this if the site doesn't end directories with / --http-url string URL of HTTP host to connect to + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -598,6 +616,7 @@ Backend only flags. These can be set in the config file also. --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2" --imagekit-versions Include old versions in directory listings --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org") @@ -607,6 +626,7 @@ Backend only flags. These can be set in the config file also. --jottacloud-auth-url string Auth server URL --jottacloud-client-id string OAuth Client Id --jottacloud-client-secret string OAuth Client Secret + --jottacloud-description string Description of the remote --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -615,6 +635,7 @@ Backend only flags. These can be set in the config file also. 
--jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi) + --koofr-description string Description of the remote --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use --koofr-mountid string Mount ID of the mount to use @@ -622,10 +643,12 @@ Backend only flags. These can be set in the config file also. --koofr-provider string Choose your storage provider --koofr-setmtime Does the backend support setting modification time (default true) --koofr-user string Your user name + --linkbox-description string Description of the remote --linkbox-token string Token from https://www.linkbox.to/admin/account -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive + --local-description string Description of the remote --local-encoding Encoding The encoding for the backend (default Slash,Dot) --local-no-check-updated Don't check to see if the files change during upload --local-no-preallocate Disable preallocation of disk space for transferred files @@ -638,6 +661,7 @@ Backend only flags. These can be set in the config file also. --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-client-id string OAuth Client Id --mailru-client-secret string OAuth Client Secret + --mailru-description string Description of the remote --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) @@ -648,12 +672,15 @@ Backend only flags. These can be set in the config file also. --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https") --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) @@ -665,6 +692,7 @@ Backend only flags. These can be set in the config file also. 
--onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -674,6 +702,7 @@ Backend only flags. These can be set in the config file also. --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous") --onedrive-link-type string Set the type of the links created by the link command (default "view") --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default "global") --onedrive-root-folder-id string ID of the root folder @@ -687,6 +716,7 @@ Backend only flags. These can be set in the config file also. --oos-config-profile string Profile name inside the oci config file (default "Default") --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don't store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -705,12 +735,14 @@ Backend only flags. These can be set in the config file also. --oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default "api.pcloud.com") --pcloud-password string Your pcloud password (obscured) @@ -721,6 +753,7 @@ Backend only flags. These can be set in the config file also. 
--pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -733,11 +766,13 @@ Backend only flags. These can be set in the config file also. --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone") + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -748,12 +783,14 @@ Backend only flags. These can be set in the config file also. --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret + --putio-description string Description of the remote --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-token string OAuth Access Token as a JSON blob --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) + --qingstor-description string Description of the remote --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API --qingstor-env-auth Get QingStor credentials from runtime @@ -762,18 +799,21 @@ Backend only flags. These can be set in the config file also. --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to --quatrix-api-key string API key for accessing Quatrix account + --quatrix-description string Description of the remote --quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s") --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-host string Host name of Quatrix account --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. 
It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi) --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) + --quatrix-skip-project-folders Skip project folders in operations --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --s3-decompress If set this will decompress gzip encoded objects + --s3-description string Description of the remote --s3-directory-markers Upload an empty object with a trailing slash when a new directory is created --s3-disable-checksum Don't store MD5 checksum with object metadata --s3-disable-http2 Disable usage of http2 for S3 backends @@ -808,19 +848,22 @@ Backend only flags. These can be set in the config file also. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn't exist + --seafile-description string Description of the remote --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -832,6 +875,7 @@ Backend only flags. These can be set in the config file also. 
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don't use concurrent reads --sftp-disable-concurrent-writes If set don't use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -866,6 +910,7 @@ Backend only flags. These can be set in the config file also. --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -874,10 +919,12 @@ Backend only flags. These can be set in the config file also. --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980") + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true) @@ -889,6 +936,7 @@ Backend only flags. These can be set in the config file also. --smb-user string SMB username (default "$USER") --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default "existing") --storj-satellite-address string Satellite address (default "us1.storj.io") @@ -897,6 +945,7 @@ Backend only flags. These can be set in the config file also. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -910,6 +959,7 @@ Backend only flags. These can be set in the config file also. 
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") @@ -929,17 +979,21 @@ Backend only flags. These can be set in the config file also. --union-action-policy string Policy to choose upstream on ACTION category (default "epall") --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs") + --union-description string Description of the remote --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default "ff") --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token + --uptobox-description string Description of the remote --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-private Set to make uploaded files private --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-description string Description of the remote --webdav-encoding string The encoding for the backend --webdav-headers CommaSepList Set HTTP headers for all transactions --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi) + --webdav-owncloud-exclude-shares Exclude ownCloud shares --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to @@ -948,6 +1002,7 @@ Backend only flags. These can be set in the config file also. --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret + --yandex-description string Description of the remote --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-token string OAuth Access Token as a JSON blob @@ -955,6 +1010,7 @@ Backend only flags. These can be set in the config file also. 
--zoho-auth-url string Auth server URL --zoho-client-id string OAuth Client Id --zoho-client-secret string OAuth Client Secret + --zoho-description string Description of the remote --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-region string Zoho region to connect to --zoho-token string OAuth Access Token as a JSON blob diff --git a/docs/content/ftp.md b/docs/content/ftp.md index b00bb8acc..748545af6 100644 --- a/docs/content/ftp.md +++ b/docs/content/ftp.md @@ -453,6 +453,17 @@ Properties: - "Ctl,LeftPeriod,Slash" - VsFTPd can't handle file names starting with dot +#### --ftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FTP_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index 561d536c5..cd9dc6a34 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -699,6 +699,17 @@ Properties: - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot +#### --gcs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GCS_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md index 76a9af6d6..1bca0095b 100644 --- a/docs/content/googlephotos.md +++ b/docs/content/googlephotos.md @@ -461,6 +461,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --gphotos-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GPHOTOS_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/hasher.md b/docs/content/hasher.md index f2606fc01..e7968cb28 100644 --- a/docs/content/hasher.md +++ b/docs/content/hasher.md @@ -224,6 +224,17 @@ Properties: - Type: SizeSuffix - Default: 0 +#### --hasher-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HASHER_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
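Many of the hunks above add a backend-level `description` option. As a minimal sketch of how such an option can be set (the remote name and description text here are placeholders, not part of the patch):

```
# Persist a description on an existing remote:
rclone config update myhasher description "Checksum cache for the media share"

# Or override it for a single run via the documented environment variable:
RCLONE_HASHER_DESCRIPTION="scratch copy" rclone lsf myhasher:
```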
diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md index 6e85414c3..ec1ada050 100644 --- a/docs/content/hdfs.md +++ b/docs/content/hdfs.md @@ -232,6 +232,17 @@ Properties: - Type: Encoding - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot +#### --hdfs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HDFS_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md index f48dcb8e9..6c4561764 100644 --- a/docs/content/hidrive.md +++ b/docs/content/hidrive.md @@ -418,6 +418,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --hidrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HIDRIVE_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/http.md b/docs/content/http.md index 0fa8ca0a1..fc557e53c 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -212,6 +212,17 @@ Properties: - Type: bool - Default: false +#### --http-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HTTP_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the http backend. diff --git a/docs/content/imagekit.md b/docs/content/imagekit.md index c0ae147d2..01cf6eaec 100644 --- a/docs/content/imagekit.md +++ b/docs/content/imagekit.md @@ -191,6 +191,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket +#### --imagekit-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_IMAGEKIT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. diff --git a/docs/content/internetarchive.md b/docs/content/internetarchive.md index 980c534a6..2123cfd4d 100644 --- a/docs/content/internetarchive.md +++ b/docs/content/internetarchive.md @@ -263,6 +263,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot +#### --internetarchive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_INTERNETARCHIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Metadata fields provided by Internet Archive. diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 3ac1fc370..f4adf93f0 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -447,6 +447,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot +#### --jottacloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_JOTTACLOUD_DESCRIPTION +- Type: string +- Required: false + ### Metadata Jottacloud has limited support for metadata, currently an extended set of timestamps. 
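One way to inspect the metadata a remote actually exposes is `rclone lsjson` with its `--metadata` flag; a minimal sketch, where the remote name and path are placeholders:

```
# List a directory including the metadata block for each entry:
rclone lsjson --metadata jotta:some/dir
```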
diff --git a/docs/content/koofr.md b/docs/content/koofr.md index 3d161297f..005ec08e6 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -214,6 +214,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --koofr-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_KOOFR_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/linkbox.md b/docs/content/linkbox.md index 4ea172b5e..3332a2ed2 100644 --- a/docs/content/linkbox.md +++ b/docs/content/linkbox.md @@ -68,6 +68,21 @@ Properties: - Type: string - Required: true +### Advanced options + +Here are the Advanced options specific to linkbox (Linkbox). + +#### --linkbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LINKBOX_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/local.md b/docs/content/local.md index fec259cd7..d45fefb89 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -569,6 +569,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --local-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LOCAL_DESCRIPTION +- Type: string +- Required: false + ### Metadata Depending on which OS is in use the local backend may return only some @@ -580,6 +591,8 @@ netbsd, macOS and Solaris. It is **not** supported on Windows yet User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix. +Metadata is supported on files and directories. + Here are the possible system metadata items for the local backend. | Name | Help | Type | Example | Read Only | diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 01de432f2..5020f07f9 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -412,6 +412,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --mailru-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MAILRU_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/mega.md b/docs/content/mega.md index e553842c0..7537d7d47 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -282,6 +282,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8,Dot +#### --mega-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEGA_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ### Process `killed` diff --git a/docs/content/memory.md b/docs/content/memory.md index 783fc2815..73f3120bc 100644 --- a/docs/content/memory.md +++ b/docs/content/memory.md @@ -64,4 +64,19 @@ The memory backend replaces the [default restricted characters set](/overview/#restricted-characters). {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/memory/memory.go then run make backenddocs" >}} +### Advanced options + +Here are the Advanced options specific to memory (In memory object storage system.). 
+ +#### --memory-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEMORY_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/netstorage.md b/docs/content/netstorage.md index ce474bed7..7302bd3a4 100644 --- a/docs/content/netstorage.md +++ b/docs/content/netstorage.md @@ -242,6 +242,17 @@ Properties: - "https" - HTTPS protocol +#### --netstorage-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_NETSTORAGE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the netstorage backend. diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index 4cf82d773..62a8aea55 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -162,6 +162,17 @@ Properties: - Type: SizeSuffix - Default: 10Mi +#### --opendrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_OPENDRIVE_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/oracleobjectstorage.md b/docs/content/oracleobjectstorage.md index 5e91b8099..070dfad2b 100644 --- a/docs/content/oracleobjectstorage.md +++ b/docs/content/oracleobjectstorage.md @@ -319,6 +319,9 @@ Properties: - use instance principals to authorize an instance to make API calls. - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - "workload_identity_auth" + - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). + - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm - "resource_principal_auth" - use resource principals to make API calls - "no_auth" @@ -704,6 +707,17 @@ Properties: - "AES256" - AES256 +#### --oos-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_OOS_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the oracleobjectstorage backend. diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md index cb1b0ba36..edf67e7ed 100644 --- a/docs/content/pcloud.md +++ b/docs/content/pcloud.md @@ -288,4 +288,15 @@ Properties: - Type: string - Required: false +#### --pcloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PCLOUD_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/pikpak.md b/docs/content/pikpak.md index 472e14fd1..c1412dfee 100644 --- a/docs/content/pikpak.md +++ b/docs/content/pikpak.md @@ -240,6 +240,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --pikpak-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PIKPAK_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the pikpak backend. 
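Backend-specific commands such as these can also be discovered from the command line; for example:

```
# Print help for all commands a given backend implements:
rclone backend help pikpak
```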
diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md index 2d462c60f..7ed301e4e 100644 --- a/docs/content/premiumizeme.md +++ b/docs/content/premiumizeme.md @@ -202,6 +202,17 @@ Properties: - Type: Encoding - Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --premiumizeme-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PREMIUMIZEME_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md index 3b23d39fe..ef42ae512 100644 --- a/docs/content/protondrive.md +++ b/docs/content/protondrive.md @@ -331,6 +331,17 @@ Properties: - Type: bool - Default: true +#### --protondrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/putio.md b/docs/content/putio.md index 8c8722b9d..6c3e377f2 100644 --- a/docs/content/putio.md +++ b/docs/content/putio.md @@ -199,6 +199,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --putio-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_PUTIO_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index 69fd51f81..c0fc0911a 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -310,6 +310,17 @@ Properties: - Type: Encoding - Default: Slash,Ctl,InvalidUtf8 +#### --qingstor-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_QINGSTOR_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/rc.md b/docs/content/rc.md index 9fb42917a..953bc4a51 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -607,6 +607,26 @@ See the [config password](/commands/rclone_config_password/) command for more in **Authentication is required for this call.** +### config/paths: Reads the config file path and other important paths. {#config-paths} + +Returns a JSON object with the following keys: + +- config: path to config file +- cache: path to root of cache directory +- temp: path to root of temporary directory + +Eg + + { + "cache": "/home/USER/.cache/rclone", + "config": "/home/USER/.rclone.conf", + "temp": "/tmp" + } + +See the [config paths](/commands/rclone_config_paths/) command for more information on the above. + +**Authentication is required for this call.** + ### config/providers: Shows how providers are configured in the config file. 
{#config-providers} Returns a JSON object: @@ -847,15 +867,12 @@ Returns the following values: [ { "bytes": total transferred bytes for this file, - "eta": estimated time in seconds until file transfer completion (may be nil) + "eta": estimated time in seconds until file transfer completion "name": name of the file, "percentage": progress of the file transfer in percent, "speed": average speed over the whole transfer in bytes per second, "speedAvg": current speed in bytes per second as an exponentially weighted moving average, "size": size of the file in bytes - "group": stats group this transfer is part of - "srcFs": name of the source remote (not present if not known) - "dstFs": name of the destination remote (not present if not known) } ], "checking": an array of names of currently active file checks @@ -907,12 +924,9 @@ Returns the following values: "size": size of the file in bytes, "bytes": total transferred bytes for this file, "checked": if the transfer is only checked (skipped, deleted), - "started_at": time the transfer was started at (RFC3339 format, eg `"2000-01-01T01:00:00.085742121Z"`), - "completed_at": time the transfer was completed at (RFC3339 format, only present if transfer is completed), + "timestamp": integer representing millisecond unix epoch, "error": string description of the error (empty if successful), - "group": string representing which stats group this is part of, - "srcFs": name of the source remote (not present if not known), - "dstFs": name of the destination remote (not present if not known), + "jobid": id of the job that this transfer belongs to } ] } @@ -1398,6 +1412,50 @@ This command does not have a command line equivalent so use this instead: rclone rc --loopback operations/fsinfo fs=remote: +### operations/hashsum: Produces a hashsum file for all the objects in the path. {#operations-hashsum} + +Produces a hash file for all the objects in the path using the hash +named. The output is in the same format as the standard +md5sum/sha1sum tool. + +This takes the following parameters: + +- fs - a remote name string e.g. "drive:" for the source, "/" for local filesystem + - this can point to a file and just that file will be returned in the listing. +- hashType - type of hash to be used +- download - check by downloading rather than with hash (boolean) +- base64 - output the hashes in base64 rather than hex (boolean) + +If you supply the download flag, it will download the data from the +remote and create the hash on the fly. This can be useful for remotes +that don't support the given hash or if you really want to check all +the data. + +Note that if you wish to supply a checkfile to check hashes against +the current files then you should use operations/check instead of +operations/hashsum. + +Returns: + +- hashsum - array of strings of the hashes +- hashType - type of hash used + +Example: + + $ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true + { + "hashType": "md5", + "hashsum": [ + "WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh", + "v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh", + "VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh", + ] + } + +See the [hashsum](/commands/rclone_hashsum/) command for more information on the above. 
+ +**Authentication is required for this call.** + ### operations/list: List the given remote and path in JSON format {#operations-list} This takes the following parameters: @@ -1764,7 +1822,9 @@ This takes the following parameters - ignoreListingChecksum - Do not use checksums for listings - resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync. Use at your own risk! -- workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync) +- workdir - server directory for history files (default: `~/.cache/rclone/bisync`) +- backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote. +- backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote. - noCleanup - retain working files See [bisync command help](https://rclone.org/commands/rclone_bisync/) diff --git a/docs/content/s3.md b/docs/content/s3.md index cfb5a7659..eb9089406 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -1366,10 +1366,10 @@ Properties: #### --s3-upload-concurrency -Concurrency for multipart uploads. +Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded -concurrently. +concurrently for multipart uploads and copies. If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing @@ -1418,6 +1418,19 @@ Properties: - Type: bool - Default: false +#### --s3-use-dual-stack + +If true use AWS S3 dual-stack endpoint (IPv6 support). + +See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html) + +Properties: + +- Config: use_dual_stack +- Env Var: RCLONE_S3_USE_DUAL_STACK +- Type: bool +- Default: false + #### --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint. @@ -1722,6 +1735,25 @@ Properties: - Type: Time - Default: off +#### --s3-version-deleted + +Show deleted file markers when using versions. + +This shows deleted file markers in the listing when using versions. These will appear +as 0 size files. The only operation which can be performed on them is deletion. + +Deleting a delete marker will reveal the previous version. + +Deleted files will always show with a timestamp. + + +Properties: + +- Config: version_deleted +- Env Var: RCLONE_S3_VERSION_DELETED +- Type: bool +- Default: false + #### --s3-decompress If set this will decompress gzip encoded objects. @@ -1872,6 +1904,17 @@ Properties: - Type: Tristate - Default: unset +#### --s3-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_S3_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case. 
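
A quick sketch of the new S3 flags above, assuming a configured remote named `s3:` and a hypothetical bucket `mybucket`:

    # List object versions, including 0 size delete markers
    rclone ls --s3-versions --s3-version-deleted s3:mybucket

    # Use the AWS dual-stack (IPv6) endpoint for a single sync
    rclone sync --s3-use-dual-stack /path/to/files s3:mybucket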
diff --git a/docs/content/seafile.md b/docs/content/seafile.md index e3e1aa109..57a00ea30 100644 --- a/docs/content/seafile.md +++ b/docs/content/seafile.md @@ -389,5 +389,16 @@ Properties: - Type: Encoding - Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8 +#### --seafile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SEAFILE_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/sftp.md b/docs/content/sftp.md index a32564c76..6a1e178d8 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -1042,6 +1042,17 @@ Properties: - Type: bool - Default: false +#### --sftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SFTP_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index 3dd1027f9..8dde24c86 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -303,6 +303,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --sharefile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SHAREFILE_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/sia.md b/docs/content/sia.md index 0ee8cba94..558c46c8a 100644 --- a/docs/content/sia.md +++ b/docs/content/sia.md @@ -194,6 +194,17 @@ Properties: - Type: Encoding - Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot +#### --sia-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SIA_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/smb.md b/docs/content/smb.md index 10eaec518..2a50a5c5a 100644 --- a/docs/content/smb.md +++ b/docs/content/smb.md @@ -248,4 +248,15 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --smb-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SMB_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} diff --git a/docs/content/storj.md b/docs/content/storj.md index 6427f009b..a377ffb87 100644 --- a/docs/content/storj.md +++ b/docs/content/storj.md @@ -293,6 +293,21 @@ Properties: - Type: string - Required: false +### Advanced options + +Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage). 
+ +#### --storj-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_STORJ_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Usage diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md index 0b19927d7..6e6738aec 100644 --- a/docs/content/sugarsync.md +++ b/docs/content/sugarsync.md @@ -272,6 +272,17 @@ Properties: - Type: Encoding - Default: Slash,Ctl,InvalidUtf8,Dot +#### --sugarsync-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SUGARSYNC_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/swift.md b/docs/content/swift.md index a627e3684..d88802227 100644 --- a/docs/content/swift.md +++ b/docs/content/swift.md @@ -587,6 +587,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8 +#### --swift-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SWIFT_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/union.md b/docs/content/union.md index 99b5f817d..0ae6da871 100644 --- a/docs/content/union.md +++ b/docs/content/union.md @@ -287,6 +287,17 @@ Properties: - Type: SizeSuffix - Default: 1Gi +#### --union-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_UNION_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. diff --git a/docs/content/uptobox.md b/docs/content/uptobox.md index 816337330..98ff5b714 100644 --- a/docs/content/uptobox.md +++ b/docs/content/uptobox.md @@ -146,6 +146,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot +#### --uptobox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_UPTOBOX_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/webdav.md b/docs/content/webdav.md index 2ae387694..7da008014 100644 --- a/docs/content/webdav.md +++ b/docs/content/webdav.md @@ -272,6 +272,28 @@ Properties: - Type: SizeSuffix - Default: 10Mi +#### --webdav-owncloud-exclude-shares + +Exclude ownCloud shares + +Properties: + +- Config: owncloud_exclude_shares +- Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES +- Type: bool +- Default: false + +#### --webdav-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_WEBDAV_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Provider notes diff --git a/docs/content/yandex.md b/docs/content/yandex.md index d8be56006..c723b839c 100644 --- a/docs/content/yandex.md +++ b/docs/content/yandex.md @@ -209,6 +209,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --yandex-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_YANDEX_DESCRIPTION +- Type: string +- Required: false + {{< rem autogenerated options stop >}} ## Limitations diff --git a/docs/content/zoho.md b/docs/content/zoho.md index b9ecdd8cd..489b8d8e0 100644 --- a/docs/content/zoho.md +++ b/docs/content/zoho.md @@ -237,6 +237,17 @@ Properties: - Type: Encoding - Default: Del,Ctl,InvalidUtf8 +#### --zoho-description + +Description of the remote + +Properties: + +- 
Config: description
+- Env Var: RCLONE_ZOHO_DESCRIPTION
+- Type: string
+- Required: false
+
 {{< rem autogenerated options stop >}}

 ## Setting up your own client_id
diff --git a/fstest/testserver/init.d/TestFTPVsftpdTLS b/fstest/testserver/init.d/TestFTPVsftpdTLS
new file mode 100755
index 000000000..7131b7c46
--- /dev/null
+++ b/fstest/testserver/init.d/TestFTPVsftpdTLS
@@ -0,0 +1,26 @@
+#!/bin/bash
+
+set -e
+
+NAME=vsftpdtls
+USER=rclone
+PASS=TiffedRestedSian4
+
+. $(dirname "$0")/docker.bash
+
+start() {
+    docker run --rm -d --name $NAME \
+        -e "FTP_USER=rclone" \
+        -e "FTP_PASS=$PASS" \
+        rclone/vsftpd
+
+    echo type=ftp
+    echo host=$(docker_ip)
+    echo user=$USER
+    echo pass=$(rclone obscure $PASS)
+    echo writing_mdtm=true
+    echo encoding=Ctl,LeftPeriod,Slash
+    echo _connect=$(docker_ip):21
+}
+
+. $(dirname "$0")/run.bash
diff --git a/graphics/logo/logo_on_light/logo_on_light__vertical_color_800px_2to1.png b/graphics/logo/logo_on_light/logo_on_light__vertical_color_800px_2to1.png
new file mode 100644
index 0000000000000000000000000000000000000000..9ab8c6fa8714517ab92a9f6830c24cc12d75bade
GIT binary patch
[binary PNG logo data (literal 23388 bytes) elided]
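
These init.d scripts are normally driven by the fstest test harness, but they can also be run by hand. A minimal sketch, assuming Docker is available and that `run.bash` dispatches on the first argument in the same way as the other scripts in this directory:

    # Start the TLS-enabled vsftpd test server and print its rclone config
    ./fstest/testserver/init.d/TestFTPVsftpdTLS start

    # Stop it again when done
    ./fstest/testserver/init.d/TestFTPVsftpdTLS stop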
z6K!;3+0rp=S);JcMk1Ld!EBr=_aFWsRzEWBe&?G~BA7le+EsR}7P^RT1y_ ziBpMH8mq0HMM`0?x0}L zkAAG6{Z5&=$L?ksXBI^Wm4ClcL1oedSx?#rMjYcLRkBa4k2nw>+9uY__>*_;vOT;- zNbhFDVvVn|1Ajl!qcftRdtzteerdVOUuj`Ja;_a7ACc=})_3jMT4S!k)%V`M8f5L9 z@$8&Wx|2Q$IGLrJ?6BBjiFWpdhTfbEpUOmxxAmY<+Vbzn#;)4(7S7d^4;{ZZHcP+t zHSnuLhLHA2e{j#T)R0CFej$T|_?pbuYjqahPGu>U1LH}&Nlt&gxY?1mZ?W3eW65uB z`K$2c-cnmf+xy@=xV9toVH&%t3N6#McbfTT1vI1{GlqwU$2OhnLienh>{p&>;WIi< zQl1fZwl@!xT6#=*oh>;~oohvR^C-@>1Zm&^KaN^9yicM+hN=LtwM?h@z};DHSC@bhKI(R6V13l37?nnO+37#_ zBZC9HFT`=eOYxsr(hg=I_e6XxbtP5{hU0UWiS3JBaLu$l5~<3 zlo1z~5kyEiI0#BhAY=rkon<7P5MrWY64E07MoPywFv!8z$puFW5Et?QbR-b2l9H05 zj)F2$64HVQXAwt1M;8ZIK^aFW2}y*cBtk+`>fcBh26zBeI(Yv(S2$A6fRvQDw6n8_ zh_fKVRmK&N5)&14Z~@G@N;-;&ARGW$gv3!K7b zRg82LIK_niX8hubk+(yTE7+jGdBejuIP5={ntJ%S+zfKSX;V~ET3T9ETv}2LAtNO! zBmSR@%v=Hj0TOXEMMZ?f|E7yGi!2ZZDAoaoQ$XNi2arWpEx^Sg$S=Uu&(B+d6UP8M zZs!HO+2#K>ind1}SP_mx{6Cn#>Ei$Qzy5XsZ;y*t?Cck?m346X`;b6~5Eti*gTTJO zUpcuu_`10O|NaM1|9bB6e=(M$h!jE;AtE9u?kFxLh>#YM5tMP3k`k1Z6p?avbrf@z zk#P90)C2uogF+nwTvXhEq(B+~pbKf(uU)8;@4v1Nb$7w(NkmLsP((^lMA}qTR8~St z79qwjA}T8)!YTZZz`{7M{+Gn^!v7zg$X_h@w=e+q{rwDt7Z9z4|07)eqcfZu{~!MN z#}@w&PXMI<_mKaVe*afp|EsS5mInS?i~kp0|EsS5mInS?i~kp0|6}SR`Ok*O#TQgT zp`e{19oh*6tq_5uo|Zb~9QX5C`^yaQgczx983=(0ap8VL!^)IGz(c|y9eoYL?}S7o zGS}V9>lYyqc8HF;ifP!)=6v`Qu3PQryI7H%)K{KOhr&23?tGwiPwNSRst~c`8C}B1 zuP(N||4fzZI{W8m*PvZ&M6aH`CK_~oV}s{`?CQV%5cOK}b&bOnchBrawp(Ft&*a4C z3v~awzCHPCg<)GiL-x;?@>0d}o8M8~I_dv|zs6OrTEcWj(Vus??Op68?Dr2BrOvU} z(Z$M;8!2plXMQO9xbLX%i+f>Y%}mX$rUK(tuek>xoH_+e(%JZuaLJVOc{~fwE`3FU zIaPPkOogL@pBbDd^%z%?ey17 zvf0oOf)M;h0unyoEiZHmnid^i=C~w-eRe@$jgkc)GlpJ3o1s_8I#Q%iA5r3QUVs8K zw({=R=Ys42L258+IjK4<=yfhy28{(L zavhLe@SMjapH29Py`|6T0qvhuDrxe=5O88ofOa}mJsnYpsiQKw|E}+=nX{=HItLt= zm93gi#-3ifdoMK<*2$e`U{3nY8lyaU>2c|AtNzQ{CfDEf<)Gxq*_;}AcT(e?FDVG# z<1WiESLWuk>2jjQ2-x4Q5Ov6bMdm!X*HV_o4Nu};+gx&D!ockLDn<}kc+oXlQmiG3 zjxJ%My^e`&cb_?mpPWsb{Q;W&MyT0rGhZbX=_4A9URDgjTa4^6CU+R(*4~D>=rQ$JT6LdE7!9Dta zUKlPp{gIAp`{-FZRG@4KbU1CIZL()gsGgoq$)*RHWKc9&k4F0uCM59QCb>A{%ey{R zv>%GRQuD89vR76Fe?Md)-mqs7TdIJoy(6?B(9zAt0(Mjh5)wk8N2is8nYrp?zii$9 zmdKKXZJXeXM8H)-?7N2j7_hjZ9*550cParL&zM7t%gy6Ek1h3}-zmt*4eMwwv?Cym zZ!s(8JkyJI$W}?u8|JV`A6Exw&BDzs?pqVZ4FVhc{B7gR-kmyWq*QI*M1M+aTnYe9 zZCP6BHzZLuw}fHQ5?81sSFSKFz?uh!C#>`I5j5+5fX3A!e)e?4 z?v62J2obe%)VhcFksSwQ-G^}0zJOYFG}7KZj2Aqk{$3NDj)3k?q!Py6s?^*%g@*wR zK5^mW_Bd&Saq3XNNnI*&Y9pPR6w@AkABL}<-pgmpcMQc@O|%g9SJA>kI3HKkLIWY@{_WSaynyKBue$Hnz^H0 z7D8FCCa2u#Pm8{2SpTB8257UNnYX|?vw)HCLUw}Ip!QK#4-XeZ>6yVe6V^@hm26hS*@S6HC+aI#Y_8MU3el1lUA6k; zA3@+{)sKw5a95Z2Lk!-(-k=i!N8}#f*5ww`Pu)K(ycD;XwNc>a_V8xD!=pp$4s`2} zkty!MzW!>AaRN$;BO4D@1FmzQYyul>Z)vWXQ;PqcmP*G8IqNDAHAm^4dv6D7be(Ps z5`t&@(I|zUpb`7|;zS+}y>~UPkXG)ujasT4D_89)+4U-tx4%B;&>JSPY|zcom&$2Wm5k z4nRy?vZEFkL9_|fnm%}Xh(HD=Zam^gP4oFK>Vh~Br>{@jE>q1P(7Q=N)^aA^IZsX}Cg~VA#u5=DS8G+wTC@ z!at7Dcyz)XL0$yp_50n;mA^qKsj5_C#COaw)f8=<4v^eQuPx3C&An+Ou_m=9ur|I< zSJ=Q!kk+)Ldw~d#Zr)6{_s^GGfKFR@JSsx}QQzBJ*Xd%&PiAyWd%poeK1jY?$^tTg zv^Xw?16lm<-v{)l22^ldFlrUmqZKC5JG%c7i1%E*$q6X0ksZ^LUkKs^*I!j@?X{7wGiZqxCeA-P) zD;){P#X)*$Gl-(tZu32K>>JSr$Y;TEbNQz=MOUa9-G~ z^RuesQu!#z-0CGRonJVgWt0i?rraRhfZQ1TGkf~p-tN!qdP!g4@*J3$Vy2@p?@5z7 z&X3I^ z*aq=$*m|%%@fW-f3g)8G1D5eYnby7lkiO~RISxc4jO!vB-~I+H7iSF9rj@2@rg&xA+PZb{BV&BPn%KtVWkM!EdK&|imwO~hr3R#|dXPQ{ zAtqJCPlE6V>jwW~dOGAa8&Xr#v&p60;&ANtc~O>Cv@wQfL2>0zIHnc7HnCN08KR(5 z@Lt$Xr+`LLQ4jHS)coXxX@ncIk{O>`Eq%?kOAkF*B0o9UkG(Af&nKB z7L6~P&UG=&_W1mxp9v!(jvlH7?i;T+!rB zkgB85ewqlCf>c;M7PX|6u1}dg7o~E3Wow@L`@C}er`{tD6r_^!q{^hi&yPOyV$;3U 
zRa8aN7RM8Bjowuzu)m@yq67pHHn=XsUt@wGySZn64`J3yv$}j}Q)YH%izRTEFrG*E1f(or{{2A?bnhCFxXs;Lis6-k;)O)5gCX?bw5p?zr?j@DNm z@kI1&fm7JLmkuZ(3kA=QK0$_9aIjFUBIXEl!_)xnE3_8le7;OL%G2sA)D5gg72`j7{@KMF6-CiZ_QX{p&sPoeOK@uF zG1`QoVlWqgqU!#vCG(G(@t+j@7t+(jr`EtbomYAm#+lD`_j*8OCk5%qtjn7Nra6-y zr3TD|_r{+0xxawX1N#haKEl~%ykn=1Z6vd5h%Ph#4B*widsh1K#-Q1kA3ytI{O#Qv zJ&37g-?s@DLA~m!&L+ZXjp%V3W=E z&v9}$NJe?%=O=naac^qfjb7EdOvZLIj#|tjXBnie-^PXf&ikSJ_^LPWfIlbSP!lbke+UhtGM29A_?5H6JK-@$!PE#N9RSuhYsENf@7y11!6}N{DWCBkL#r> z?ZQ_eufL(4^?1FV4LTdm2u|C$lYW|}_9o!l-q>uMZ~sY5h3!!DkBGgbJjmMi?MVP- z)OOrQag&@@RFOp3nYMj7O;aohQWLoJs|FnM2KzYBR@u*aZ1Fl~zxuF}wLOlTF$KFS zFBK*E?wy$sEWArVSBC`RY|N-(=yz^{ma0_$$Sex-TSpcXDBI3q2D;%kLV~oUlTu-4 z>h=Me2fL4}$vJT)OUW@~ZL^E}{m8hYMvvJ=!a@TNn*@*AY;gV}!<4MradPWk7t+0H9F*J^_j+g7L1Z_w1}4E2h%&WU{EvsoGHliNvKr_FUXelfuEq-aaN zIpIElEHy1+;46{+3shZ%s8I9PR@Xl|Tzup&EMPi(yQ|EJspfI#JT1`U1?eDpi7E5K z>8|5r`d!7vYZ^Wg4|*%|%=^Ec>a0T|p>D^bU>|5)$~S@D=?Pe&X8j|x5d?=UNWPt8 z0MGTUwQzDWazE&asD}QLDo7nYNdqNjcF;rKX%@~y_F996RTV)}t^t`P_#D(ZWpfu>#W_N*?<5Oc=(g%>Z$ zDY0zuQ?}~SPc@+mP#!+MT4kmJ$zh#0%FOJTr#N_379oI5P_ma-0bZpD&a#)9L~bAi z*Zo2C$eAy3k`h6jC}5H#C6`V>GF?kpD>bu|To{gYX&(23Q~)RzHcaYx@`7ya*);6Z zylDt2y0XE4J~%JUp#(X=f8aB9;H7Tv!1z~4r~+YBqe(>(X+CjfewVCaN@^3^^eT5l z#yZ&hv~b}J6_k9=mf8t@dPVVSwW)xB5s~)w6Hx0u$3I>644u4%zKZr?5X5#INaHY6 zu(6B|VDQFbCS*l2Sr56-N7P#y2>=7-CrAv%^F9Nht~G>f@wVl@0nu7; zFrp4fv99iwE1&t?#LvqwV_XPye*=)wsUZ88T^+BzcxOUn%w62$d z8VMgm>FktJwTT+r@JyfSx<|y*U~T9ZOTPmXk%14M{sa-m6q`qCJ{>$KN)|qot?3` z3B!pq&iW9D6V7;NU*=hBLHHY!if!KAt~SnWZ?g~htn7x>9)o=4sBA})xA!iL z#2&64ac_$i_dZ-wVRiF1A8)Xt0ME8dFy?7U(MFcL%>xb!s*ec5_Y zZ_X^8DvE$vDd3-p>>$hS@fJhe9uj{=OPNDsGglKqi{y>W`G@=OSwN9v=E}f*wNN%D zQ}qgfhi?T_)63KSqs+=}V73NUf__r0ntFafS)xZ@o==fs8?pVYfmG$u#KRbFTze;m zsvmX)6`&=op|n4=_n{_lF7(<&MjV@e(pE4e-1=az zBwc4xVL0H$3r>3qTQ6(NJ+B6Vcl~b(teHSTb@f!L*pREGPHrDK(i@a6L2GyGz?XJ@ zr-oR9nXorS+BoFM_AG*_Lo^^gIjBwambF>>?kOe-_`;0^*-(&ji4LO(@{Y`vE=CN& z517P0_wp&*C3qf@k1Ew~R%OE;4O<>s5J?MiE!O-k@3?67u(n+2+S=x#JSlk*P);pQ9zkaS-y(9Cb= z9cTiA-ynprK(Q&_M}Lj~yzM3u#SDuQSJRK7YaU3wm}|1QfRfk`lT9WOei>;1DjZD) zA`S^mK3T{^KD08>ngl<2_4vWvQ z+kE7#q#zEwi~A*hs9;4Hv#pg{8j|Ns%-wJkoB#$ynonbeYPap}WU@#h097y~Wvy#M zYySZHIp|g=O-#l(M^et%K6w@aBSrTbvg&NEfR=wV#2{g|gBOV-k^shjAcjA2zxP|< zm=S3RMB~6;Dj}Q8UD|;nTi9gMX9hvIo%sud9|w$%{Qy~KiVPTazXORnwXr#78!rSm z1R2M?|6^sp&K#7Z7guwuwKc(tS>BA`5$s?v>~ssr<#RC^Vo*IY#P7(z+ayMoPr@vY zn}#gq(cb@Z^^g3>hA|r`Z|syl5dS_xv+ORq%vY#m8}p4pNS9o~?Al`NF*niU%;UbD z04bI+UvYjThWdR_1C>w9pms*(+)v21As_=~m78R_!A+D;Tsz7p$u&7*%my7x4xXk2 zh@DbnDrA02av*t-k|H46lRUj5-48=t&~e42AtNrz9(7W0?kBl=fB3a>#SfASH%#7vSdduA##GV~re^i+ z90^0bQYB9+Oe$jWrOV(gWBllL|JWA7Cn|{ezeFaT9PHut;;*C6NA5f!ZhJF<9lC%X zooc^d+5zi4VKm4Kw&uocT4M$jJ7%+0m(d1YK1|Tu2CAL z#?3U)aoMj*ZV-03Fj}1pJGIFoW8Q*W+lM$77`VMIDrWu5GgUZ$1Dly!+D3#SF;h z`Tl*-*(3Ow3&T}seat#}p;ESc!nM`0XmoD2iV33(X?Ce8*ttm8JfnM~Kj>l_l)`{7u9pAT+ zj5+{`b1)zrQS*S9%a@}2u96w0lf%={9PR;P}HE&4||~6~dHr zB$b}!Vyni_T4(0>lgrXF@j=kdBnDQ|+KoxOKXXMos-Ab5ddkM1=SU!zV#EFL;z(!J z{#Asnwc#)9JScR17~|7Ap@bGMZag91z?tx)3C>?k0ytKvRXPAbOz!hGey9Yof? 
z481<2j5X6cO0aNgi~ljx72(QC%#%P=|MyEPtEsxKhl;VvI1%+4tfbsJhg=3*|QDLL9^z<6<J?)(-qXFS#>M*z)!=q;4|iJ^;8$A{_u9tpU*7^miPmx;v%txTsp5VIt0^||lBT%=rJIGv^G4M`* zYf?g|w~s^^zBHY*KWIPZD(OM3rhz#|yZKM~5uI>Su&RZ~;K1>7j{P;yj*opSMbz<| zI6K%PPW4&ingB`1C^iV&I+}497K6`BJIs6)xBfCn@P0Iej}3#+-Ys zkuc~x_%fs6777`vh;#-PfToJz3d&J;73XM-k-gMj=y^8_Iv4E+JW;rz;?nNtN?|y- zyI-m7HH3eMxG~CKT$VTybLv4NAw}@U{vEe({OE8r132OK=y)xtV?=fsFp$^no~Hts zJn1=dFrrok#RTpY_3nPTxb`I97q-A=d8z}K(Jg-Am1^~3hhNdfzy6#=XZ^@I z`2IEa``Mx)yb00-MoMlWo+$u@qv+!}3Dg8?;`>o3EuDAVfDHO$;KZRNMiH|m9W)0( z6EVno@%6)p2<2Qn0lLJ;J8yu z63I!&ePJ11Z(F1F%or*1Lv3c+5Qv%2$O|^Y>Lj+wdm>!$hU=GY^t}@pltYQI&Y0*N z)S|>L4`KI@zO1@Nr>y2hciSS0rKpe3cJmyG$2qc-*}R7L&u4|tuyyPtoRjd$#1Y~B zaE0Bc>c2^U8K~R!Zl1W*Kz_eZExG&SdUp+qR4r^o=xJ+5lP9xe2scR_Y14Ct3Yc9o zTMgk3VKF~fC#l0Lc*r{hrWhhgT;hA-=jn z-NA%Nnl&b~S{|hAjwwWmw3x&GLriC+7Nirxif`~xt-qVkeWn}T7k4vG3ntB1dE=Y{JRl59|(h@-^s{?H6_sadR| zLJy$f{%DZ@2UXGZUlq;C?f>L&7aCIA;N#ljOtFtJEqE#c()){nF3dtEsruR+d?T;x z$Pjvf&q{<@Vb43&UfT}UBN*}j#lE97a?UUQ0!9d<#2=|e^8~h47?9X=MONY{8Q3Tb z?;s&X1O3F9I`y%|Gbs_7<^3Y&qW%RYb!_#uf>U2B$Rdmt)}h~+C?dqznV1NZ=I-!N z^aGbC1htv%<|Y;4)Gg_I>^pIi*iD{q5dsY7tt2}!#dc?Atjk^o@EDdwz1jDzCdCwp@j-$ zshSO;NL6a2`GAg$*D-`%Mqua!aGhTJa0a3dL!Dv9x_QdQ(95=jTSGn3mfaArfdPp=RRuP)n&T1dNX zZyrJ#Yt8jQc&#GmdZTuzMifbXI`b#`o6>tve9VVu0~MaByWB|H;#)nT>*)DL?va|& zz7$jfjY|LVp-*>RN`3+NX~?Y;iqe8wKYQp>WVdsj*B8p^SoH*Q}2A?eRe-?BHp@BLg3r|!A?Fr;jZ zLm~q9Y_CGU_hNH&K!EcMBzj9c#rj0yLlI57JoM2^@($^RN8k3Z)H)H>SUq@wZ6&{7 z4f71Ti8^D?(-)@9h7VD`Zg|j+gYqoZo))H*Aa?H_#z8+1hC7qF_5j zbxacsDf7pgxNSO{BQLF(pLg3$4|6570=Ji95ueouc_W%P&c|$-`VJyjC1_NsBE{zk z@djhJLdeG#;MQe?w#o5Bo}~CzXiaIY_j*Q6ZfOSt+JKcjJ#X+SahWKILj=T*p!&+H zVF-o&Qtj?1ODCC2bClB0QvCirKUDwiTb#9c&Z6{_X ztztgs2PM9Nhs6?6Mmi$~1(OvsrTA@;QXhcv7jCJK{jL`84txk>dvxwv%zNi4^kw7H z={%)1jnWpGgTY;8Nq$AqsC0Lm^tZE}1hs@ae8rK(d26qzVm!=myzf$fQ`In2fadYv zIgBicp~tqHC$Um~+CSfVNIyPHJeB3H`LbLvhAj~fX-L%hLB-A@-92wtQ0>aWXZHt# zYD9u}WpmYPOZ}4!lVfHvh9TanhZa|wpY`w>#=;vR3$)!!9CMz1TY>uS-)m4@8}|q4g^_>m()k?(*q>5Kk}dTk6`z9-?dIQbBWqHrUGzLyuuh_eMVatbewsz|cF3=j&~(fF-q`XWma0&v>$G2(UkTr|9!1 zjuw>=1iwCJm6DrLHoerr9!u46N%Q7(V@nJc!p3E_vh^7M(6Epr&ljr^Y4;0%1lzSM zI5mC>vZoT|tjJv}q9!;SN1mmSN5;UUY2!l}J-)$50bDqkpe8S5nmpWwWWzdgk*#(F zigc>6t!ma+=8-3gm{9Vw-xAMgge;6k4mXI~q?UZ4qSj&QAUSgt}O z2aT01*$zEEw>U)LMhCrA%Ds{JD^Jmvi5vecqRkxE6P^q76FaaR;(`CzU04W@l8yRE zcH6TsY%sNidf`*J|Bq-6jnit94b6-fpL{XR`31GxvT_Z&RC$ReteJReUuvwn?VSFWe(4LAr+U=> z`a?fW;kJ0$o_p)QnRl=uuv(Wp4nNe;mz=#%pVP?*^$#&><=orh4b8>ncHGy+mg-VF zk5)V|??pQ{bcqrBjbTLh5$U%sN6c6JGA*TYB|r++YGv3(^g!rH9732DXY(NkP+$1M zA(wONs3_92Hd*;hh1?XN>{qqzg?yN4)QW7 zl_JLWCY*ovJKZ7PDPOv!o?97B``}nyAa&HKzP~OK*wEt!$C5IJ{+EpHbq=ye%c~hy zl;~rn@t+xsF9WS>UhXj~cmE*F6o#E^BgF{`j1E5IE^ZkKQz!1at91R0I{Qt?kuN_q z$Sr#Evy3WX{;uclz_z$d_2d}CG$A!H$JqTX!d#qutY|A%+!t4)vv;*PQ16mr$mhZp zW6!*HIB{k1+g;TKgj)ho~>)RtaB-yX? z^b^|JZM>JG2-K^I)2Pc7U5pjhlwA>eM1Lf9-rz~?D)z?qSOBm4&@(Wl3iYN|m$FwR zK2fqS_;cb`13!jFCQ=u5h~02cTRj@@zZ8gEtL<^h$Pz(<0e0!g39%Od0!mn}T2V{-`2*uHypP=G-_ zLpQ&^x7X#dNFhJ$O?VB}yt{nWKus&>pMo=)`_n%QZ~S7SSn7f(Qd;MKzDfrbBD2pp zBnZRVt~(q0=tI|W*j@KWCmkZW)Ct_Z*-XmgKi&rzQ0=cuh$GW|As&8qZ@E!ya6+E? 
diff --git a/rclone.1 b/rclone.1
off\f[R].
This can potentially cause data corruption if you do.
You can work around this by giving each rclone its own cache hierarchy
with \f[C]--cache-dir\f[R].
You don\[aq]t need to worry about this if the remotes in use don\[aq]t
overlap.
.SS --vfs-cache-mode off
.PP
In this mode (the default) the cache will read directly from the remote
and write directly to the remote without caching anything on disk.
.PP
This will mean some operations are not possible
.IP \[bu] 2
Files can\[aq]t be opened for both read AND write
.IP \[bu] 2
Files opened for write can\[aq]t be seeked
.IP \[bu] 2
Existing files opened for write must have O_TRUNC set
.IP \[bu] 2
Files open for read with O_TRUNC will be opened write only
.IP \[bu] 2
Files open for write only will behave as if O_TRUNC was supplied
.IP \[bu] 2
Open modes O_APPEND, O_TRUNC are ignored
.IP \[bu] 2
If an upload fails it can\[aq]t be retried
.SS --vfs-cache-mode minimal
.PP
This is very similar to \[dq]off\[dq] except that files opened for read
AND write will be buffered to disk.
This means that files opened for write will be a lot more compatible,
but uses the minimal disk space.
.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS --vfs-cache-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+.SS --vfs-cache-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When data is read from the remote this is buffered to disk as well.
+.PP
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+.PP
+So if an application only reads the starts of each file, then rclone
+will only buffer the start of the file.
+These files will appear to be their full size in the cache, but they
+will be sparse files with only the data that has been downloaded present
+in them.
+.PP
+This mode should support all normal file system operations and is
+otherwise identical to \f[C]--vfs-cache-mode\f[R] writes.
+.PP
+When reading a file rclone will read \f[C]--buffer-size\f[R] plus
+\f[C]--vfs-read-ahead\f[R] bytes ahead.
+The \f[C]--buffer-size\f[R] is buffered in memory whereas the
+\f[C]--vfs-read-ahead\f[R] is buffered on disk.
+.PP
+When using this mode it is recommended that \f[C]--buffer-size\f[R] is
+not set too large and \f[C]--vfs-read-ahead\f[R] is set large if
+required.
+.PP
+\f[B]IMPORTANT\f[R] not all file systems support sparse files.
+In particular FAT/exFAT do not.
+Rclone will perform very badly if the cache directory is on a filesystem
+which doesn\[aq]t support sparse files and it will log an ERROR message
+if one is detected.
+.SS Fingerprinting
+.PP
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file.
+Fingerprints are made from:
+.IP \[bu] 2
+size
+.IP \[bu] 2
+modification time
+.IP \[bu] 2
+hash
+.PP
+where available on an object.
+.PP
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+.SS VFS Chunked Reading
+.PP
+When rclone reads files from a remote it reads them in chunks.
+This means that rather than requesting the whole file rclone reads the
+chunk specified.
+This can reduce the used download quota for some remotes by requesting +only chunks from the remote that are actually read, at the cost of an +increased number of requests. +.PP +These flags control the chunking: +.IP +.nf +\f[C] +--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) +--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) +\f[R] +.fi +.PP +Rclone will start reading a chunk of size +\f[C]--vfs-read-chunk-size\f[R], and then double the size for each read. +When \f[C]--vfs-read-chunk-size-limit\f[R] is specified, and greater +than \f[C]--vfs-read-chunk-size\f[R], the chunk size for each open file +will get doubled only until the specified value is reached. +If the value is \[dq]off\[dq], which is the default, the limit is +disabled and the chunk size will grow indefinitely. +.PP +With \f[C]--vfs-read-chunk-size 100M\f[R] and +\f[C]--vfs-read-chunk-size-limit 0\f[R] the following parts will be +downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. +When \f[C]--vfs-read-chunk-size-limit 500M\f[R] is specified, the result +would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so +on. +.PP +Setting \f[C]--vfs-read-chunk-size\f[R] to \f[C]0\f[R] or \[dq]off\[dq] +disables chunked reading. +.SS VFS Performance +.PP +These flags may be used to enable/disable features of the VFS for +performance or other reasons. +See also the chunked reading feature. +.PP +In particular S3 and Swift benefit hugely from the +\f[C]--no-modtime\f[R] flag (or use \f[C]--use-server-modtime\f[R] for a +slightly different effect) as each read of the modification time takes a +transaction. +.IP +.nf +\f[C] +--no-checksum Don\[aq]t compare checksums on up/download. +--no-modtime Don\[aq]t read/write the modification time (can speed things up). +--no-seek Don\[aq]t allow seeking in files. +--read-only Only allow read-only access. +\f[R] +.fi +.PP +Sometimes rclone is delivered reads or writes out of order. +Rather than seeking rclone will wait a short time for the in sequence +read or write to come in. +These flags only come into effect when not using an on disk cache file. +.IP +.nf +\f[C] +--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) +--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +\f[R] +.fi +.PP +When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value +writes or full), the global flag \f[C]--transfers\f[R] can be set to +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). +.IP +.nf +\f[C] +--transfers int Number of file transfers to run in parallel (default 4) +\f[R] +.fi +.SS VFS Case Sensitivity +.PP +Linux file systems are case-sensitive: two files can differ only by +case, and the exact case must be used when opening a file. +.PP +File systems in modern Windows are case-insensitive but case-preserving: +although existing files can be opened using any case, the exact case +used to create the file is preserved and available for programs to +query. +It is not allowed for two files in the same directory to differ only by +case. +.PP +Usually file systems on macOS are case-insensitive. +It is possible to make macOS file systems case-sensitive but that is not +the default. +.PP +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone +handles these two cases. 
+If its value is \[dq]false\[dq], rclone passes file names to the remote
+as-is.
+If the flag is \[dq]true\[dq] (or appears without a value on the command
+line), rclone may perform a \[dq]fixup\[dq] as explained below.
+.PP
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote.
+If an argument refers to an existing file with exactly the same name,
+then the case of the existing file on the disk will be used.
+However, if a file name with exactly the same name is not found but a
+name differing only by case exists, rclone will transparently fixup the
+name.
+This fixup happens only when an existing file is requested.
+Case sensitivity of file names created anew by rclone is controlled by
+the underlying remote.
+.PP
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source).
+The flag controls whether \[dq]fixup\[dq] is performed to satisfy the
+target.
+.PP
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: \[dq]true\[dq] on
+Windows and macOS, \[dq]false\[dq] otherwise.
+If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
.SS VFS Disk Options
.PP
This flag allows you to manually set the statistics about the filing
system.
It can be useful when those statistics cannot be read correctly
automatically.
.IP
.nf
\f[C]
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
\f[R]
.fi
.SS Alternate report of used bytes
.PP
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running \f[C]df\f[R]
on the filesystem, then pass the flag \f[C]--vfs-used-is-size\f[R] to
rclone.
+With this flag set, instead of relying on the backend to report this +information, rclone will scan the whole remote similar to +\f[C]rclone size\f[R] and compute the total used space itself. +.PP +\f[I]WARNING.\f[R] Contrary to \f[C]rclone size\f[R], this flag ignores +filters so that the result is accurate. +However, this is very inefficient and may cost lots of API calls +resulting in extra charges. +Use it as a last resort and only with caching. +.IP +.nf +\f[C] +rclone nfsmount remote:path /path/to/mountpoint [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + --addr string IPaddress:Port or :Port to bind server to + --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows) + --allow-other Allow access to other users (not supported on Windows) + --allow-root Allow access to root user (not supported on Windows) + --async-read Use asynchronous reads (not supported on Windows) (default true) + --attr-timeout Duration Time for which file/directory attributes are cached (default 1s) + --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows) + --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s) + --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s) + --debug-fuse Debug the FUSE internals - needs -v + --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows) + --devname string Set the device name - default is remote:path + --dir-cache-time Duration Time to cache directory entries for (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required) + --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) + -h, --help help for nfsmount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki) + --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset) + --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only) + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) + --no-checksum Don\[aq]t compare checksums on up/download + --no-modtime Don\[aq]t read/write the modification time (can speed things up) + --no-seek Don\[aq]t allow seeking in files + --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true) + --noapplexattr Ignore all \[dq]com.apple.*\[dq] extended attributes (supported on OSX only) + -o, --option stringArray Option for libfuse/WinFsp (repeat if required) + --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) + --read-only Only allow read-only access + --sudo Use sudo to run the mount command as root. 
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) + --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) + --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) + --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-refresh Refreshes the directory cache recursively in the background on start + --vfs-used-is-size rclone size Use the rclone size algorithm for Used size + --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) + --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) + --volname string Set the volume name (supported on Windows and OSX only) + --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SH SEE ALSO +.IP \[bu] 2 +rclone (https://rclone.org/commands/rclone/) - Show help for rclone +commands, flags and backends. .SH rclone obscure .PP Obscure password for use in the rclone config file. @@ -8546,6 +9869,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
.SS VFS Disk Options
.PP
This flag allows you to manually set the statistics about the filing
@@ -8600,6 +9949,7 @@ rclone serve dlna remote:path [flags]
 --read-only Only allow read-only access
 --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
 --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
 --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
 --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -8612,7 +9962,7 @@ rclone serve dlna remote:path [flags]
 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
 --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
 --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
 --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
 --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -9091,6 +10441,32 @@ If the flag is not provided on the command line, then its default value
depends on the operating system where rclone runs: \[dq]true\[dq] on
Windows and macOS, \[dq]false\[dq] otherwise.
If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
.SS VFS Disk Options
.PP
This flag allows you to manually set the statistics about the filing
@@ -9163,6 +10539,7 @@ rclone serve docker [flags]
 --socket-gid int GID for unix socket (default: current process GID) (default 1000)
 --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
 --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
 --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
 --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
 --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -9175,7 +10552,7 @@ rclone serve docker [flags]
 --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
 --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
 --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
 --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
 --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
 --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -9626,6 +11003,32 @@ If the flag is not provided on the command line, then its default value
depends on the operating system where rclone runs: \[dq]true\[dq] on
Windows and macOS, \[dq]false\[dq] otherwise.
If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -9667,7 +11070,7 @@ STDOUT. ignored. .PP There is an example program -bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) +bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -9776,6 +11179,7 @@ rclone serve ftp remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default \[dq]anonymous\[dq]) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -9788,7 +11192,7 @@ rclone serve ftp remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -10451,6 +11855,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
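+.PP +As a concrete illustration of canonical equivalence (an editorial +example, not rclone output): the filename \f[C]caf\['e].txt\f[R] can be +encoded either as the single precomposed code point U+00E9 (NFC) or as a +plain \f[C]e\f[R] followed by the combining acute accent U+0301 (NFD). +The two byte sequences differ even though the rendered filename looks +identical, which is why the \[dq]fixup\[dq] normalizes names before +comparing them.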
+.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. +However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -10492,7 +11922,7 @@ STDOUT. ignored. .PP There is an example program -bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) +bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -10610,6 +12040,7 @@ rclone serve http remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -10622,7 +12053,7 @@ rclone serve http remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -10691,6 +12122,12 @@ Modifying files through NFS protocol requires VFS caching. Usually you will need to specify \f[C]--vfs-cache-mode\f[R] in order to be able to write to the mountpoint (full is recommended). If you don\[aq]t specify VFS cache mode, the mount will be read-only. +Note also that \f[C]--nfs-cache-handle-limit\f[R] controls the maximum +number of cached file handles stored by the caching handler. +This should not be set too low or you may experience errors when trying +to access files.
+The default is \f[C]1000000\f[R], but consider lowering this limit if +the server\[aq]s system resource usage causes problems. .PP To serve NFS over the network use the following command: .IP @@ -11096,6 +12533,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. +However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R].
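+.PP +For example, a macOS user wanting this protection might serve a remote +over NFS like this (an illustrative invocation; the remote name and +address are placeholders to adapt to your setup): +.IP +.nf +\f[C] +rclone serve nfs remote: --addr localhost:2049 --vfs-cache-mode full --vfs-block-norm-dupes +\f[R] +.fi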
.SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -11139,6 +12602,7 @@ rclone serve nfs remote:path [flags] --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for nfs + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) --no-checksum Don\[aq]t compare checksums on up/download --no-modtime Don\[aq]t read/write the modification time (can speed things up) --no-seek Don\[aq]t allow seeking in files @@ -11146,6 +12610,7 @@ rclone serve nfs remote:path [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -11158,7 +12623,7 @@ rclone serve nfs remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -12009,6 +13474,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -12072,6 +13563,7 @@ rclone serve s3 remote:path [flags] --server-write-timeout Duration Timeout for server writing data (default 1h0m0s) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -12084,7 +13576,7 @@ rclone serve s3 remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -12575,6 +14067,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -12616,7 +14134,7 @@ STDOUT. ignored. .PP There is an example program -bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) +bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -12725,6 +14243,7 @@ rclone serve sftp remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -12737,7 +14256,7 @@ rclone serve sftp remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -13433,6 +14952,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
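+.PP +To see how normalization-equivalent duplicates can arise in the first +place, the two encodings can be created by hand on a filesystem that +preserves filename bytes (an illustrative shell session; ext4 behaves +this way, while some filesystems normalize names on creation): +.IP +.nf +\f[C] +touch \[dq]$(printf \[aq]caf\[rs]xc3\[rs]xa9.txt\[aq])\[dq]  # NFC: U+00E9 +touch \[dq]$(printf \[aq]cafe\[rs]xcc\[rs]x81.txt\[aq])\[dq] # NFD: e + U+0301 +\f[R] +.fi +.PP +Both names render as \f[C]caf\['e].txt\f[R], but they are distinct +files.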
+.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. +However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable; however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -13474,7 +15019,7 @@ STDOUT. ignored. .PP There is an example program -bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py) +bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -13594,6 +15139,7 @@ rclone serve webdav remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -13606,7 +15152,7 @@ rclone serve webdav remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -14601,12 +16147,13 @@ rclone sync --interactive /path/to/files remote:current-backup .fi .SS Metadata support .PP -Metadata is data about a file which isn\[aq]t the contents of the file. +Metadata is data about a file (or directory) which isn\[aq]t the +contents of the file (or directory). Normally rclone only preserves the modification time and the content (MIME) type where possible.
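+.PP +For example, the following copies a tree while preserving whatever +metadata both backends support, using the \f[C]--metadata\f[R] flag +described below (an illustrative command; the paths are placeholders): +.IP +.nf +\f[C] +rclone copy --metadata source:path destination:path +\f[R] +.fi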
.PP -Rclone supports preserving all the available metadata on files (not -directories) when using the \f[C]--metadata\f[R] or \f[C]-M\f[R] flag. +Rclone supports preserving all the available metadata on files and +directories when using the \f[C]--metadata\f[R] or \f[C]-M\f[R] flag. .PP Exactly what metadata is supported and what that support means depends on the backend. @@ -14614,6 +16161,9 @@ Backends that support metadata have a metadata section in their docs and are listed in the features table (https://rclone.org/overview/#features) (Eg local (https://rclone.org/local/#metadata), s3) .PP +Some backends don\[aq]t support metadata, some only support metadata on +files and some support metadata on both files and directories. +.PP Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to @@ -14636,6 +16186,15 @@ This flag can be repeated as many times as necessary. The --metadata-mapper flag can be used to pass the name of a program which can transform metadata when it is being copied from source to destination. +.PP +Rclone supports \f[C]--metadata-set\f[R] and \f[C]--metadata-mapper\f[R] +when doing server side \f[C]Move\f[R] and server side \f[C]Copy\f[R], but +not when doing server side \f[C]DirMove\f[R] (renaming a directory) as +this would involve recursing into the directory. +Note that you can disable \f[C]DirMove\f[R] with +\f[C]--disable DirMove\f[R] and rclone will revert to using +\f[C]Move\f[R] for each individual object where \f[C]--metadata-set\f[R] +and \f[C]--metadata-mapper\f[R] are supported. .SS Types of metadata .PP Metadata is divided into two types. @@ -15485,6 +17044,28 @@ data was copied, or skipping if not. NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly! +.SS --fix-case +.PP +Normally, a sync to a case insensitive dest (such as macOS / Windows) +will not result in a matching filename if the source and dest filenames +have casing differences but are otherwise identical. +For example, syncing \f[C]hello.txt\f[R] to \f[C]HELLO.txt\f[R] will +normally result in the dest filename remaining \f[C]HELLO.txt\f[R]. +If \f[C]--fix-case\f[R] is set, then \f[C]HELLO.txt\f[R] will be renamed +to \f[C]hello.txt\f[R] to match the source. +.PP +NB: +.IP \[bu] 2 +directory names with incorrect casing will also be fixed +.IP \[bu] 2 +\f[C]--fix-case\f[R] will be ignored if \f[C]--immutable\f[R] is set +.IP \[bu] 2 +using \f[C]--local-case-sensitive\f[R] instead is not advisable; it will +cause \f[C]HELLO.txt\f[R] to get deleted! +.IP \[bu] 2 +the old dest filename must not be excluded by filters. +Be especially careful with +\f[C]--files-from\f[R] (https://rclone.org/filtering/#files-from-read-list-of-source-file-names), +which does not respect +\f[C]--ignore-case\f[R] (https://rclone.org/filtering/#ignore-case-make-searches-case-insensitive)! +.IP \[bu] 2 +on remotes that do not support server-side move, \f[C]--fix-case\f[R] +will require downloading the file and re-uploading it. +To avoid this, do not use \f[C]--fix-case\f[R]. .SS --fs-cache-expire-duration=TIME .PP When using rclone via the API rclone caches created remotes for 5 @@ -15972,15 +17553,15 @@ being copied to .IP \[bu] 2 \f[C]DstFsType\f[R] is the name of the destination backend. .IP \[bu] 2 -\f[C]Remote\f[R] is the path of the file relative to the root. +\f[C]Remote\f[R] is the path of the object relative to the root.
.IP \[bu] 2 \f[C]Size\f[R], \f[C]MimeType\f[R], \f[C]ModTime\f[R] are attributes of -the file. +the object. .IP \[bu] 2 \f[C]IsDir\f[R] is \f[C]true\f[R] if this is a directory (not yet implemented). .IP \[bu] 2 -\f[C]ID\f[R] is the source \f[C]ID\f[R] of the file if known. +\f[C]ID\f[R] is the source \f[C]ID\f[R] of the object if known. .IP \[bu] 2 \f[C]Metadata\f[R] is the backend specific metadata as described in the backend docs. @@ -16061,7 +17642,7 @@ json.dump(o, sys.stdout, indent=\[dq]\[rs]t\[dq]) .PP You can find this example (slightly expanded) in the rclone source code at -bin/test_metadata_mapper.py (https://github.com/rclone/rclone/blob/master/test_metadata_mapper.py). +bin/test_metadata_mapper.py (https://github.com/rclone/rclone/blob/master/bin/test_metadata_mapper.py). .PP If you want to see the input to the metadata mapper and the output returned from it in the log you can use \f[C]-vv --dump mapper\f[R]. @@ -16122,7 +17703,7 @@ Capable backends are marked in the overview (https://rclone.org/overview/#optional-features) as \f[C]MultithreadUpload\f[R]. (They need to implement either the \f[C]OpenWriterAt\f[R] or -\f[C]OpenChunkedWriter\f[R] internal interfaces). +\f[C]OpenChunkWriter\f[R] internal interfaces). These include \f[C]local\f[R], \f[C]s3\f[R], \f[C]azureblob\f[R], \f[C]b2\f[R], \f[C]oracleobjectstorage\f[R] and \f[C]smb\f[R] at the time of writing. @@ -16245,6 +17826,10 @@ remote files if they are incorrect as it would normally. This can be used if the remote is being synced with another tool also (e.g. the Google Drive client). +.SS --no-update-dir-modtime +.PP +When using this flag, rclone won\[aq]t update modification times of +remote directories if they are incorrect as it would normally. .SS --order-by string .PP The \f[C]--order-by\f[R] flag controls the order in which files in the @@ -17502,7 +19087,7 @@ For more help and alternate methods see: https://rclone.org/remote_setup/ Execute the following on the machine with the web browser (same rclone version recommended): - rclone authorize \[dq]amazon cloud drive\[dq] + rclone authorize \[dq]dropbox\[dq] Then paste the result below: result> @@ -17513,7 +19098,7 @@ Then on your main desktop machine .IP .nf \f[C] -rclone authorize \[dq]amazon cloud drive\[dq] +rclone authorize \[dq]dropbox\[dq] If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth Log in and authorize rclone for access Waiting for code... @@ -18446,7 +20031,7 @@ for an alternative \f[C]filter-file.txt\f[R]: .PP Files \f[C]file1.jpg\f[R], \f[C]file3.png\f[R] and \f[C]file2.avi\f[R] are listed whilst \f[C]secret17.jpg\f[R] and files without the suffix -\&.jpg\f[C]or\f[R].png\[ga] are excluded. +\f[C].jpg\f[R] or \f[C].png\f[R] are excluded. .PP E.g. for an alternative \f[C]filter-file.txt\f[R]: @@ -19619,6 +21204,32 @@ password (https://rclone.org/commands/rclone_config_password/) command for more information on the above. .PP \f[B]Authentication is required for this call.\f[R] .SS config/paths: Reads the config file path and other important paths.
+.PP +Returns a JSON object with the following keys: +.IP \[bu] 2 +config: path to config file +.IP \[bu] 2 +cache: path to root of cache directory +.IP \[bu] 2 +temp: path to root of temporary directory +.PP +Eg +.IP +.nf +\f[C] +{ + \[dq]cache\[dq]: \[dq]/home/USER/.cache/rclone\[dq], + \[dq]config\[dq]: \[dq]/home/USER/.rclone.conf\[dq], + \[dq]temp\[dq]: \[dq]/tmp\[dq] +} +\f[R] +.fi +.PP +See the config paths (https://rclone.org/commands/rclone_config_paths/) +command for more information on the above. +.PP +\f[B]Authentication is required for this call.\f[R] .SS config/providers: Shows how providers are configured in the config file. .PP Returns a JSON object: - providers - array of objects @@ -20573,6 +22184,63 @@ instead: rclone rc --loopback operations/fsinfo fs=remote: \f[R] .fi +.SS operations/hashsum: Produces a hashsum file for all the objects in the path. +.PP +Produces a hash file for all the objects in the path using the hash +named. +The output is in the same format as the standard md5sum/sha1sum tool. +.PP +This takes the following parameters: +.IP \[bu] 2 +fs - a remote name string e.g. +\[dq]drive:\[dq] for the source, \[dq]/\[dq] for local filesystem +.RS 2 +.IP \[bu] 2 +this can point to a file and just that file will be returned in the +listing. +.RE +.IP \[bu] 2 +hashType - type of hash to be used +.IP \[bu] 2 +download - check by downloading rather than with hash (boolean) +.IP \[bu] 2 +base64 - output the hashes in base64 rather than hex (boolean) +.PP +If you supply the download flag, it will download the data from the +remote and create the hash on the fly. +This can be useful for remotes that don\[aq]t support the given hash or +if you really want to check all the data. +.PP +Note that if you wish to supply a checkfile to check hashes against the +current files then you should use operations/check instead of +operations/hashsum. +.PP +Returns: +.IP \[bu] 2 +hashsum - array of strings of the hashes +.IP \[bu] 2 +hashType - type of hash used +.PP +Example: +.IP +.nf +\f[C] +$ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true +{ + \[dq]hashType\[dq]: \[dq]md5\[dq], + \[dq]hashsum\[dq]: [ + \[dq]WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh\[dq], + \[dq]v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh\[dq], + \[dq]VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh\[dq] + ] +} +\f[R] +.fi +.PP +See the hashsum (https://rclone.org/commands/rclone_hashsum/) command +for more information on the above. +.PP +\f[B]Authentication is required for this call.\f[R] .SS operations/list: List the given remote and path in JSON format .PP This takes the following parameters: @@ -21042,7 +22710,13 @@ errors, instead of requiring resync. Use at your own risk! .IP \[bu] 2 workdir - server directory for history files (default: -/home/ncw/.cache/rclone/bisync) +\f[C]\[ti]/.cache/rclone/bisync\f[R]) +.IP \[bu] 2 +backupdir1 - --backup-dir for Path1. +Must be a non-overlapping path on the same remote. +.IP \[bu] 2 +backupdir2 - --backup-dir for Path2. +Must be a non-overlapping path on the same remote.
.IP \[bu] 2 noCleanup - retain working files .PP @@ -21567,21 +23241,6 @@ T}@T{ - T} T{ -Amazon Drive -T}@T{ -MD5 -T}@T{ -- -T}@T{ -Yes -T}@T{ -No -T}@T{ -R -T}@T{ -- -T} -T{ Amazon S3 (or S3 compatible) T}@T{ MD5 T}@T{ R/W T}@T{ No T}@T{ No T}@T{ R/W T}@T{ RWU T} T{ @@ -21706,7 +23365,7 @@ Google Drive T}@T{ MD5, SHA1, SHA256 T}@T{ -R/W +DR/W T}@T{ No T}@T{ Yes T}@T{ R/W T}@T{ - +DRWU T} T{ Google Photos T}@T{ @@ -21916,7 +23575,7 @@ Microsoft OneDrive T}@T{ QuickXorHash \[u2075] T}@T{ -R/W +DR/W T}@T{ Yes T}@T{ No T}@T{ R T}@T{ - +DRW T} T{ OpenDrive T}@T{ @@ -22096,7 +23755,7 @@ SFTP T}@T{ MD5, SHA1 \[S2] T}@T{ -R/W +DR/W T}@T{ Depends T}@T{ @@ -22231,7 +23890,7 @@ The local filesystem T}@T{ All T}@T{ -R/W +DR/W T}@T{ Depends T}@T{ No T}@T{ - T}@T{ -RWU +DRWU T} .TE .PP @@ -22300,8 +23959,8 @@ Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. -some backends will only write a timestamp that represent the time of the -upload. +some backends will only write a timestamp that represents the time of +the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though can be configured to check the file hash (with the \f[C]--checksum\f[R] flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it. .PP +.TS +tab(@); +lw(19.4n) lw(50.6n). +T{ +Key +T}@T{ +Explanation +T} +_ +T{ +\f[C]-\f[R] +T}@T{ +ModTimes not supported - times likely the upload time +T} +T{ +\f[C]R\f[R] +T}@T{ +ModTimes supported on files but can\[aq]t be changed without re-upload +T} +T{ +\f[C]R/W\f[R] +T}@T{ +Read and Write ModTimes fully supported on files +T} +T{ +\f[C]DR\f[R] +T}@T{ +ModTimes supported on files and directories but can\[aq]t be changed +without re-upload +T} +T{ +\f[C]DR/W\f[R] +T}@T{ +Read and Write ModTimes fully supported on files and directories +T} +.TE +.PP For storage systems with a \f[C]-\f[R] in the ModTime column, the modification time read on objects is not the modification time of the file when uploaded. ignored. .PP Storage systems with \f[C]R/W\f[R] (for read/write) in the ModTime column also support modtime-only operations. +.PP +Storage systems with \f[C]D\f[R] in the ModTime column means that the +following symbols apply to directories as well as files. .SS Case Insensitive .PP If a cloud storage system is case sensitive then it is possible to have @@ -23107,7 +24806,7 @@ The levels of metadata support are .PP .TS tab(@); -l l. +lw(19.4n) lw(50.6n).
T{ Key T}@T{ @@ -23117,17 +24816,34 @@ _ T{ \f[C]R\f[R] T}@T{ -Read only System Metadata +Read only System Metadata on files only T} T{ \f[C]RW\f[R] T}@T{ -Read and write System Metadata +Read and write System Metadata on files only T} T{ \f[C]RWU\f[R] T}@T{ -Read and write System Metadata and read and write User Metadata +Read and write System Metadata and read and write User Metadata on files +only +T} +T{ +\f[C]DR\f[R] +T}@T{ +Read only System Metadata on files and directories +T} +T{ +\f[C]DRW\f[R] +T}@T{ +Read and write System Metadata on files and directories +T} +T{ +\f[C]DRWU\f[R] +T}@T{ +Read and write System Metadata and read and write User Metadata on files +and directories T} .TE .PP @@ -23217,31 +24933,6 @@ T}@T{ Yes T} T{ -Amazon Drive -T}@T{ -Yes -T}@T{ -No -T}@T{ -Yes -T}@T{ -Yes -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -Yes -T} -T{ Amazon S3 (or S3 compatible) T}@T{ No @@ -23567,6 +25258,31 @@ T}@T{ Yes T} T{ +ImageKit +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ Internet Archive T}@T{ No @@ -24294,7 +26010,7 @@ T} T{ The local filesystem T}@T{ -Yes +No T}@T{ No T}@T{ @@ -24435,7 +26151,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + -I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -24449,6 +26165,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don\[aq]t check the destination, copy regardless --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-dir-modtime Don\[aq]t update directory modification times --no-update-modtime Don\[aq]t update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq]) @@ -24469,6 +26186,7 @@ Flags just used for \f[C]rclone sync\f[R]. --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -24524,7 +26242,7 @@ General networking and HTTP stuff. 
--tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.65.0\[dq]) + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.66.0\[dq]) \f[R] .fi .SS Performance @@ -24709,14 +26427,7 @@ These can be set in the config file also. .IP .nf \f[C] - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name @@ -24727,6 +26438,8 @@ These can be set in the config file also. --azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal\[aq]s client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don\[aq]t store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -24757,6 +26470,7 @@ These can be set in the config file also. --azurefiles-client-secret string One of the service principal\[aq]s client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -24776,8 +26490,9 @@ These can be set in the config file also. 
--b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -24796,6 +26511,7 @@ These can be set in the config file also. --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -24812,6 +26528,7 @@ These can be set in the config file also. --cache-db-path string Directory to store file structure metadata DB (default \[dq]$HOME/.cache/rclone/cache-backend\[dq]) --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) @@ -24825,15 +26542,19 @@ These can be set in the config file also. 
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default \[dq]md5\[dq]) --chunker-remote string Remote to chunk/unchunk + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default \[dq]gzip\[dq]) --compress-ram-cache-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress -L, --copy-links Follow symlinks and copy the pointed to item + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default \[dq]base32\[dq]) --crypt-filename-encryption string How to encrypt the filenames (default \[dq]standard\[dq]) @@ -24844,6 +26565,7 @@ These can be set in the config file also. --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can\[aq]t be decrypted --crypt-suffix string If this is set it will override the default suffix of \[dq].bin\[dq] (default \[dq].bin\[dq]) --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs @@ -24853,6 +26575,7 @@ These can be set in the config file also. --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -24901,6 +26624,7 @@ These can be set in the config file also. --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -24910,10 +26634,12 @@ These can be set in the config file also. 
--dropbox-token-url string Token server url --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -24924,6 +26650,7 @@ These can be set in the config file also. --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -24949,6 +26676,7 @@ These can be set in the config file also. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -24969,6 +26697,7 @@ These can be set in the config file also. --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -24977,10 +26706,12 @@ These can be set in the config file also. --gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. 
myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -24989,6 +26720,7 @@ These can be set in the config file also. --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default \[dq]https://api.hidrive.strato.com/2.1\[dq]) @@ -24999,10 +26731,12 @@ These can be set in the config file also. --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don\[aq]t use HEAD requests --http-no-slash Set this if the site doesn\[aq]t end directories with / --http-url string URL of HTTP host to connect to + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -25011,6 +26745,7 @@ These can be set in the config file also. --imagekit-upload-tags string Tags to add to the uploaded files, e.g. \[dq]tag1,tag2\[dq] --imagekit-versions Include old versions in directory listings --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don\[aq]t ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default \[dq]https://s3.us.archive.org\[dq]) @@ -25020,6 +26755,7 @@ These can be set in the config file also. --jottacloud-auth-url string Auth server URL --jottacloud-client-id string OAuth Client Id --jottacloud-client-secret string OAuth Client Secret + --jottacloud-description string Description of the remote --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -25028,6 +26764,7 @@ These can be set in the config file also. 
--jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi) + --koofr-description string Description of the remote --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use --koofr-mountid string Mount ID of the mount to use --koofr-provider string Choose your storage provider --koofr-setmtime Does the backend support setting modification time (default true) --koofr-user string Your user name + --linkbox-description string Description of the remote --linkbox-token string Token from https://www.linkbox.to/admin/account -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive + --local-description string Description of the remote --local-encoding Encoding The encoding for the backend (default Slash,Dot) --local-no-check-updated Don\[aq]t check to see if the files change during upload --local-no-preallocate Disable preallocation of disk space for transferred files --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-client-id string OAuth Client Id --mailru-client-secret string OAuth Client Secret + --mailru-description string Description of the remote --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default \[dq]https\[dq]) --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
--onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -25087,6 +26831,7 @@ These can be set in the config file also. --onedrive-link-scope string Set the scope of the links created by the link command (default \[dq]anonymous\[dq]) --onedrive-link-type string Set the type of the links created by the link command (default \[dq]view\[dq]) --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default \[dq]global\[dq]) --onedrive-root-folder-id string ID of the root folder @@ -25100,6 +26845,7 @@ These can be set in the config file also. --oos-config-profile string Profile name inside the oci config file (default \[dq]Default\[dq]) --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don\[aq]t store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -25118,12 +26864,14 @@ These can be set in the config file also. --oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default \[dq]api.pcloud.com\[dq]) --pcloud-password string Your pcloud password (obscured) @@ -25134,6 +26882,7 @@ These can be set in the config file also. 
--pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -25146,11 +26895,13 @@ These can be set in the config file also. --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]) + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -25161,12 +26912,14 @@ These can be set in the config file also. --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret + --putio-description string Description of the remote --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-token string OAuth Access Token as a JSON blob --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) + --qingstor-description string Description of the remote --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API --qingstor-env-auth Get QingStor credentials from runtime @@ -25175,18 +26928,21 @@ These can be set in the config file also. --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to --quatrix-api-key string API key for accessing Quatrix account + --quatrix-description string Description of the remote --quatrix-effective-upload-time string Wanted upload time for one chunk (default \[dq]4s\[dq]) --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-host string Host name of Quatrix account --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. 
It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] (default 95.367Mi) --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) + --quatrix-skip-project-folders Skip project folders in operations --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --s3-decompress If set this will decompress gzip encoded objects + --s3-description string Description of the remote --s3-directory-markers Upload an empty object with a trailing slash when a new directory is created --s3-disable-checksum Don\[aq]t store MD5 checksum with object metadata --s3-disable-http2 Disable usage of http2 for S3 backends @@ -25221,19 +26977,22 @@ These can be set in the config file also. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn\[aq]t exist + --seafile-description string Description of the remote --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -25245,6 +27004,7 @@ These can be set in the config file also. 
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don\[aq]t use concurrent reads --sftp-disable-concurrent-writes If set don\[aq]t use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -25279,6 +27039,7 @@ These can be set in the config file also. --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -25287,10 +27048,12 @@ These can be set in the config file also. --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default \[dq]http://127.0.0.1:9980\[dq]) + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default \[dq]Sia-Agent\[dq]) --skip-links Don\[aq]t warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default \[dq]WORKGROUP\[dq]) --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. print$) which users aren\[aq]t supposed to access (default true) @@ -25302,6 +27065,7 @@ These can be set in the config file also. --smb-user string SMB username (default \[dq]$USER\[dq]) --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default \[dq]existing\[dq]) --storj-satellite-address string Satellite address (default \[dq]us1.storj.io\[dq]) @@ -25310,6 +27074,7 @@ These can be set in the config file also. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -25323,6 +27088,7 @@ These can be set in the config file also. 
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default \[dq]public\[dq]) @@ -25342,17 +27108,21 @@ These can be set in the config file also. --union-action-policy string Policy to choose upstream on ACTION category (default \[dq]epall\[dq]) --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default \[dq]epmfs\[dq]) + --union-description string Description of the remote --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default \[dq]ff\[dq]) --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token + --uptobox-description string Description of the remote --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-private Set to make uploaded files private --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-description string Description of the remote --webdav-encoding string The encoding for the backend --webdav-headers CommaSepList Set HTTP headers for all transactions --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi) + --webdav-owncloud-exclude-shares Exclude ownCloud shares --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to @@ -25361,6 +27131,7 @@ These can be set in the config file also. --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret + --yandex-description string Description of the remote --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-token string OAuth Access Token as a JSON blob @@ -25368,6 +27139,7 @@ These can be set in the config file also. --zoho-auth-url string Auth server URL --zoho-client-id string OAuth Client Id --zoho-client-secret string OAuth Client Secret + --zoho-description string Description of the remote --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-region string Zoho region to connect to --zoho-token string OAuth Access Token as a JSON blob @@ -26098,12 +27870,21 @@ docker volume inspect my_vol .PP If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first. +.SS Bisync +.PP +\f[C]bisync\f[R] is \f[B]in beta\f[R] and is considered an \f[B]advanced +command\f[R], so use with care. 
+Make sure you have read and understood the entire +manual (https://rclone.org/bisync) (especially the Limitations section) +before using, or data loss can result. +Questions can be asked in the Rclone Forum (https://forum.rclone.org/). .SS Getting started .IP \[bu] 2 Install rclone (https://rclone.org/install/) and setup your remotes. .IP \[bu] 2 Bisync will create its working directory at -\f[C]\[ti]/.cache/rclone/bisync\f[R] on Linux or +\f[C]\[ti]/.cache/rclone/bisync\f[R] on Linux, +\f[C]/Users/yourusername/Library/Caches/rclone/bisync\f[R] on Mac, or \f[C]C:\[rs]Users\[rs]MyLogin\[rs]AppData\[rs]Local\[rs]rclone\[rs]bisync\f[R] on Windows. Make sure that this location is writable. @@ -26112,16 +27893,28 @@ Run bisync with the \f[C]--resync\f[R] flag, specifying the paths to the local and remote sync directory roots. .IP \[bu] 2 For successive sync runs, leave off the \f[C]--resync\f[R] flag. +(\f[B]Important!\f[R]) .IP \[bu] 2 Consider using a filters file for excluding unnecessary files and directories from the sync. .IP \[bu] 2 Consider setting up the --check-access feature for safety. .IP \[bu] 2 -On Linux, consider setting up a crontab entry. +On Linux or Mac, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains. .PP +For example, your first command might look like this: +.IP +.nf +\f[C] +rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run +\f[R] +.fi +.PP +If all looks good, run it again without \f[C]--dry-run\f[R]. +After that, remove \f[C]--resync\f[R] as well. +.PP Here is a typical run log (with timestamps removed for clarity): .IP .nf @@ -26182,36 +27975,36 @@ Positional arguments: Type \[aq]rclone listremotes\[aq] for list of configured remotes. Optional Flags: - --check-access Ensure expected \[ga]RCLONE_TEST\[ga] files are found on - both Path1 and Path2 filesystems, else abort. - --check-filename FILENAME Filename for \[ga]--check-access\[ga] (default: \[ga]RCLONE_TEST\[ga]) - --check-sync CHOICE Controls comparison of final listings: - \[ga]true | false | only\[ga] (default: true) - If set to \[ga]only\[ga], bisync will only compare listings - from the last run but skip actual sync. - --filters-file PATH Read filtering patterns from a file - --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. - If exceeded, the bisync run will abort. (default: 50%) - --force Bypass \[ga]--max-delete\[ga] safety check and run the sync. - Consider using with \[ga]--verbose\[ga] - --create-empty-src-dirs Sync creation and deletion of empty directories. - (Not compatible with --remove-empty-dirs) - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. - Warning: Path1 files may overwrite Path2 versions. - Consider using \[ga]--verbose\[ga] or \[ga]--dry-run\[ga] first. - --ignore-listing-checksum Do not use checksums for listings - (add --ignore-checksum to additionally skip post-copy checksum checks) - --resilient Allow future runs to retry after certain less-serious errors, - instead of requiring --resync. Use at your own risk! - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --workdir PATH Use custom working directory (useful for testing). 
- (default: \[ga]\[ti]/.cache/rclone/bisync\[ga]) - -n, --dry-run Go through the motions - No files are copied/deleted. - -v, --verbose Increases logging verbosity. - May be specified more than once for more details. - -h, --help help for bisync + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default \[dq]true\[dq]) + --compare string Comma-separated list of bisync-specific compare options ex. \[aq]size,modtime,checksum\[aq] (default: \[aq]size,modtime\[aq]) + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default \[dq]none\[dq]) + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: \[aq]conflict\[aq]) + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default \[dq]none\[dq]) + --retries int Retry operations this many times if they fail (requires --resilient). (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) + --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%) + -n, --dry-run Go through the motions - No files are copied/deleted. 
+ -v, --verbose Increases logging verbosity. May be specified more than once for more details. \f[R] .fi .PP @@ -26251,25 +28044,16 @@ will have ALL empty directories purged as the last step in the process. .PP This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. -Path2 files that do not exist in Path1 will be copied to Path1, and the -process will then copy the Path1 tree to Path2. +By default, Path2 files that do not exist in Path1 will be copied to +Path1, and the process will then copy the Path1 tree to Path2. .PP -The \f[C]--resync\f[R] sequence is roughly equivalent to: +The \f[C]--resync\f[R] sequence is roughly equivalent to the following +(but see \f[C]--resync-mode\f[R] for other options): .IP .nf \f[C] -rclone copy Path2 Path1 --ignore-existing -rclone copy Path1 Path2 -\f[R] -.fi -.PP -Or, if using \f[C]--create-empty-src-dirs\f[R]: -.IP -.nf -\f[C] -rclone copy Path2 Path1 --ignore-existing -rclone copy Path1 Path2 --create-empty-src-dirs -rclone copy Path2 Path1 --create-empty-src-dirs +rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs] +rclone copy Path1 Path2 [--create-empty-src-dirs] \f[R] .fi .PP @@ -26279,10 +28063,12 @@ This is required for safety - that bisync can verify that both paths are valid. .PP When using \f[C]--resync\f[R], a newer version of a file on the Path2 -filesystem will be overwritten by the Path1 filesystem version. +filesystem will (by default) be overwritten by the Path1 filesystem +version. (Note that this is NOT entirely -symmetrical (https://github.com/rclone/rclone/issues/5681#issuecomment-938761815).) -Carefully evaluate deltas using +symmetrical (https://github.com/rclone/rclone/issues/5681#issuecomment-938761815), +and more symmetrical options can be specified with the +\f[C]--resync-mode\f[R] flag.) Carefully evaluate deltas using --dry-run (https://rclone.org/flags/#non-backend-flags). .PP For a resync run, one of the paths may be empty (no files in the path @@ -26295,6 +28081,125 @@ fails with \f[C]Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst\f[R] This is a safety check that an unexpected empty path does not result in deleting \f[B]everything\f[R] in the other path. +.PP +Note that \f[C]--resync\f[R] implies \f[C]--resync-mode path1\f[R] +unless a different \f[C]--resync-mode\f[R] is explicitly specified. +It is not necessary to use both the \f[C]--resync\f[R] and +\f[C]--resync-mode\f[R] flags -- either one is sufficient without the +other. +.PP +\f[B]Note:\f[R] \f[C]--resync\f[R] (including \f[C]--resync-mode\f[R]) +should only be used under three specific (rare) circumstances: 1. +It is your \f[I]first\f[R] bisync run (between these two paths) 2. +You\[aq]ve just made changes to your bisync settings (such as editing +the contents of your \f[C]--filters-file\f[R]) 3. +There was an error on the prior run, and as a result, bisync now +requires \f[C]--resync\f[R] to recover +.PP +The rest of the time, you should \f[I]omit\f[R] \f[C]--resync\f[R]. +The reason is because \f[C]--resync\f[R] will only \f[I]copy\f[R] (not +\f[I]sync\f[R]) each side to the other. +Therefore, if you included \f[C]--resync\f[R] for every bisync run, it +would never be possible to delete a file -- the deleted file would +always keep reappearing at the end of every run (because it\[aq]s being +copied from the other side where it still exists). +Similarly, renaming a file would always result in a duplicate copy (both +old and new name) on both sides. 
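+.PP
+As a rough sketch of that workflow (the remote names and paths here are
+placeholders), a trial first run, the real first run, and the routine
+runs that follow might look like:
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --resync --dry-run
+rclone bisync remote1:path1 remote2:path2 --resync
+rclone bisync remote1:path1 remote2:path2
+\f[R]
+.fi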
+.PP
+If you find that frequent interruptions from #3 are an issue, rather
+than automatically running \f[C]--resync\f[R], the recommended
+alternative is to use the \f[C]--resilient\f[R], \f[C]--recover\f[R],
+and \f[C]--conflict-resolve\f[R] flags (along with Graceful Shutdown
+mode, when needed) for a very robust \[dq]set-it-and-forget-it\[dq]
+bisync setup that can automatically bounce back from almost any
+interruption it might encounter.
+Consider adding something like the following:
+.IP
+.nf
+\f[C]
+--resilient --recover --max-lock 2m --conflict-resolve newer
+\f[R]
+.fi
+.SS --resync-mode CHOICE
+.PP
+In the event that a file differs on both sides during a
+\f[C]--resync\f[R], \f[C]--resync-mode\f[R] controls which version will
+overwrite the other.
+The supported options are similar to \f[C]--conflict-resolve\f[R].
+For all of the following options, the version that is kept is referred
+to as the \[dq]winner\[dq], and the version that is overwritten
+(deleted) is referred to as the \[dq]loser\[dq].
+The options are named after the \[dq]winner\[dq]:
+.IP \[bu] 2
+\f[C]path1\f[R] - (the default) - the version from Path1 is
+unconditionally considered the winner (regardless of \f[C]modtime\f[R]
+and \f[C]size\f[R], if any).
+This can be useful if one side is more trusted or up-to-date than the
+other, at the time of the \f[C]--resync\f[R].
+.IP \[bu] 2
+\f[C]path2\f[R] - same as \f[C]path1\f[R], except the path2 version is
+considered the winner.
+.IP \[bu] 2
+\f[C]newer\f[R] - the newer file (by \f[C]modtime\f[R]) is considered
+the winner, regardless of which side it came from.
+This may result in having a mix of some winners from Path1, and some
+winners from Path2.
+(The implementation is analogous to running
+\f[C]rclone copy --update\f[R] in both directions.)
+.IP \[bu] 2
+\f[C]older\f[R] - same as \f[C]newer\f[R], except the older file is
+considered the winner, and the newer file is considered the loser.
+.IP \[bu] 2
+\f[C]larger\f[R] - the larger file (by \f[C]size\f[R]) is considered the
+winner (regardless of \f[C]modtime\f[R], if any).
+This can be a useful option for remotes without \f[C]modtime\f[R]
+support, or with the kinds of files (such as logs) that tend to grow but
+not shrink over time.
+.IP \[bu] 2
+\f[C]smaller\f[R] - the smaller file (by \f[C]size\f[R]) is considered
+the winner (regardless of \f[C]modtime\f[R], if any).
+.PP
+For all of the above options, note the following: - If either of the
+underlying remotes lacks support for the chosen method, it will be
+ignored and will fall back to the default of \f[C]path1\f[R].
+(For example, if \f[C]--resync-mode newer\f[R] is set, but one of the
+paths uses a remote that doesn\[aq]t support \f[C]modtime\f[R].) - If a
+winner can\[aq]t be determined because the chosen method\[aq]s attribute
+is missing or equal, it will be ignored, and bisync will instead try to
+determine whether the files differ by looking at the other
+\f[C]--compare\f[R] methods in effect.
+(For example, if \f[C]--resync-mode newer\f[R] is set, but the Path1 and
+Path2 modtimes are identical, bisync will compare the sizes.) If bisync
+concludes that they differ, preference is given to whichever is the
+\[dq]source\[dq] at that moment.
+(In practice, this gives a slight advantage to Path2, as the 2to1 copy
+comes before the 1to2 copy.) If the files \f[I]do not\f[R] differ,
+nothing is copied (as both sides are already correct).
+- These options apply only to files that exist on both sides (with the
+same name and relative path).
+Files that exist \f[I]only\f[R] on one side and not the other are +\f[I]always\f[R] copied to the other, during \f[C]--resync\f[R] (this is +one of the main differences between resync and non-resync runs.). +- \f[C]--conflict-resolve\f[R], \f[C]--conflict-loser\f[R], and +\f[C]--conflict-suffix\f[R] do not apply during \f[C]--resync\f[R], and +unlike these flags, nothing is renamed during \f[C]--resync\f[R]. +When a file differs on both sides during \f[C]--resync\f[R], one version +always overwrites the other (much like in \f[C]rclone copy\f[R].) +(Consider using \f[C]--backup-dir\f[R] to retain a backup of the losing +version.) - Unlike for \f[C]--conflict-resolve\f[R], +\f[C]--resync-mode none\f[R] is not a valid option (or rather, it will +be interpreted as \[dq]no resync\[dq], unless \f[C]--resync\f[R] has +also been specified, in which case it will be ignored.) - Winners and +losers are decided at the individual file-level only (there is not +currently an option to pick an entire winning directory atomically, +although the \f[C]path1\f[R] and \f[C]path2\f[R] options typically +produce a similar result.) - To maintain backward-compatibility, the +\f[C]--resync\f[R] flag implies \f[C]--resync-mode path1\f[R] unless a +different \f[C]--resync-mode\f[R] is explicitly specified. +Similarly, all \f[C]--resync-mode\f[R] options (except \f[C]none\f[R]) +imply \f[C]--resync\f[R], so it is not necessary to use both the +\f[C]--resync\f[R] and \f[C]--resync-mode\f[R] flags simultaneously -- +either one is sufficient without the other. .SS --check-access .PP Access check files are an additional safety measure against data loss. @@ -26337,6 +28242,185 @@ One or more files having this filename must exist, synchronized between your source and destination filesets, in order for \f[C]--check-access\f[R] to succeed. See --check-access for additional details. +.SS --compare +.PP +As of \f[C]v1.66\f[R], bisync fully supports comparing based on any +combination of size, modtime, and checksum (lifting the prior +restriction on backends without modtime support.) +.PP +By default (without the \f[C]--compare\f[R] flag), bisync inherits the +same comparison options as \f[C]sync\f[R] (that is: \f[C]size\f[R] and +\f[C]modtime\f[R] by default, unless modified with flags such as +\f[C]--checksum\f[R] (https://rclone.org/docs/#c-checksum) or +\f[C]--size-only\f[R].) +.PP +If the \f[C]--compare\f[R] flag is set, it will override these defaults. +This can be useful if you wish to compare based on combinations not +currently supported in \f[C]sync\f[R], such as comparing all three of +\f[C]size\f[R] AND \f[C]modtime\f[R] AND \f[C]checksum\f[R] +simultaneously (or just \f[C]modtime\f[R] AND \f[C]checksum\f[R]). +.PP +\f[C]--compare\f[R] takes a comma-separated list, with the currently +supported values being \f[C]size\f[R], \f[C]modtime\f[R], and +\f[C]checksum\f[R]. +For example, if you want to compare size and checksum, but not modtime, +you would do: +.IP +.nf +\f[C] +--compare size,checksum +\f[R] +.fi +.PP +Or if you want to compare all three: +.IP +.nf +\f[C] +--compare size,modtime,checksum +\f[R] +.fi +.PP +\f[C]--compare\f[R] overrides any conflicting flags. +For example, if you set the conflicting flags +\f[C]--compare checksum --size-only\f[R], \f[C]--size-only\f[R] will be +ignored, and bisync will compare checksum and not size. +To avoid confusion, it is recommended to use \f[I]either\f[R] +\f[C]--compare\f[R] or the normal \f[C]sync\f[R] flags, but not both. 
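+.PP
+As a concrete sketch (the remote names and paths are placeholders), a
+full command comparing modtime and checksum, but not size, might look
+like:
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --compare modtime,checksum
+\f[R]
+.fi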
+.PP +If \f[C]--compare\f[R] includes \f[C]checksum\f[R] and both remotes +support checksums but have no hash types in common with each other, +checksums will be considered \f[I]only\f[R] for comparisons within the +same side (to determine what has changed since the prior sync), but not +for comparisons against the opposite side. +If one side supports checksums and the other does not, checksums will +only be considered on the side that supports them. +.PP +When comparing with \f[C]checksum\f[R] and/or \f[C]size\f[R] without +\f[C]modtime\f[R], bisync cannot determine whether a file is +\f[C]newer\f[R] or \f[C]older\f[R] -- only whether it is +\f[C]changed\f[R] or \f[C]unchanged\f[R]. +(If it is \f[C]changed\f[R] on both sides, bisync still does the +standard equality-check to avoid declaring a sync conflict unless it +absolutely has to.) +.PP +It is recommended to do a \f[C]--resync\f[R] when changing +\f[C]--compare\f[R] settings, as otherwise your prior listing files may +not contain the attributes you wish to compare (for example, they will +not have stored checksums if you were not previously comparing +checksums.) +.SS --ignore-listing-checksum +.PP +When \f[C]--checksum\f[R] or \f[C]--compare checksum\f[R] is set, bisync +will retrieve (or generate) checksums (for backends that support them) +when creating the listings for both paths, and store the checksums in +the listing files. +\f[C]--ignore-listing-checksum\f[R] will disable this behavior, which +may speed things up considerably, especially on backends (such as +local (https://rclone.org/local/)) where hashes must be computed on the +fly instead of retrieved. +Please note the following: +.IP \[bu] 2 +As of \f[C]v1.66\f[R], \f[C]--ignore-listing-checksum\f[R] is now +automatically set when neither \f[C]--checksum\f[R] nor +\f[C]--compare checksum\f[R] are in use (as the checksums would not be +used for anything.) +.IP \[bu] 2 +\f[C]--ignore-listing-checksum\f[R] is NOT the same as +\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), +and you may wish to use one or the other, or both. +In a nutshell: \f[C]--ignore-listing-checksum\f[R] controls whether +checksums are considered when scanning for diffs, while +\f[C]--ignore-checksum\f[R] controls whether checksums are considered +during the copy/sync operations that follow, if there ARE diffs. +.IP \[bu] 2 +Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently +computes hashes for one path \f[I]even when there\[aq]s no common hash +with the other path\f[R] (for example, a +crypt (https://rclone.org/crypt/#modification-times-and-hashes) remote.) +This can still be beneficial, as the hashes will still be used to detect +changes within the same side (if \f[C]--checksum\f[R] or +\f[C]--compare checksum\f[R] is set), even if they can\[aq]t be used to +compare against the opposite side. +.IP \[bu] 2 +If you wish to ignore listing checksums \f[I]only\f[R] on remotes where +they are slow to compute, consider using \f[C]--no-slow-hash\f[R] (or +\f[C]--slow-hash-sync-only\f[R]) instead of +\f[C]--ignore-listing-checksum\f[R]. +.IP \[bu] 2 +If \f[C]--ignore-listing-checksum\f[R] is used simultaneously with +\f[C]--compare checksum\f[R] (or \f[C]--checksum\f[R]), checksums will +be ignored for bisync deltas, but still considered during the sync +operations that follow (if deltas are detected based on modtime and/or +size.) 
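+.PP
+To illustrate the last point above with a hypothetical invocation
+(paths are placeholders), the following would skip checksums when
+scanning for deltas, while still verifying checksums during any copy
+operations that follow:
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --checksum --ignore-listing-checksum
+\f[R]
+.fi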
+.SS --no-slow-hash
+.PP
+On some remotes (notably \f[C]local\f[R]), checksums can dramatically
+slow down a bisync run, because hashes cannot be stored and need to be
+computed in real-time when they are requested.
+On other remotes (such as \f[C]drive\f[R]), they add practically no time
+at all.
+The \f[C]--no-slow-hash\f[R] flag will automatically skip checksums on
+remotes where they are slow, while still comparing them on others
+(assuming \f[C]--compare\f[R] includes \f[C]checksum\f[R].) This can be
+useful when one of your bisync paths is slow but you still want to check
+checksums on the other, for a more robust sync.
+.SS --slow-hash-sync-only
+.PP
+Same as \f[C]--no-slow-hash\f[R], except slow hashes are still
+considered during sync calls.
+They are still NOT considered for determining deltas, nor are they
+included in listings.
+They are also skipped during \f[C]--resync\f[R].
+The main use case for this flag is when you have a large number of
+files, but relatively few of them change from run to run -- so you
+don\[aq]t want to check your entire tree every time (it would take too
+long), but you still want to consider checksums for the smaller group of
+files for which a \f[C]modtime\f[R] or \f[C]size\f[R] change was
+detected.
+Keep in mind that this speed savings comes with a safety trade-off: if a
+file\[aq]s content were to change without a change to its
+\f[C]modtime\f[R] or \f[C]size\f[R], bisync would not detect it, and it
+would not be synced.
+.PP
+\f[C]--slow-hash-sync-only\f[R] is only useful if both remotes share a
+common hash type (if they don\[aq]t, bisync will automatically fall back
+to \f[C]--no-slow-hash\f[R].) Both \f[C]--no-slow-hash\f[R] and
+\f[C]--slow-hash-sync-only\f[R] have no effect without
+\f[C]--compare checksum\f[R] (or \f[C]--checksum\f[R]).
+.SS --download-hash
+.PP
+If \f[C]--download-hash\f[R] is set, bisync will use best efforts to
+obtain an MD5 checksum by downloading and computing on-the-fly, when
+checksums are not otherwise available (for example, a remote that
+doesn\[aq]t support them.) Note that since rclone has to download the
+entire file, this may dramatically slow down your bisync runs, and is
+also likely to use a lot of data, so it is probably not practical for
+bisync paths with a large total file size.
+However, it can be a good option for syncing small-but-important files
+with maximum accuracy (for example, a source code repo on a
+\f[C]crypt\f[R] remote.) An additional advantage over methods like
+\f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/) is
+that the original file is not required for comparison (for example,
+\f[C]--download-hash\f[R] can be used to bisync two different crypt
+remotes with different passwords.)
+.PP
+When \f[C]--download-hash\f[R] is set, bisync still looks for more
+efficient checksums first, and falls back to downloading only when none
+are found.
+It takes priority over conflicting flags such as
+\f[C]--no-slow-hash\f[R].
+\f[C]--download-hash\f[R] is not suitable for Google Docs and other
+files of unknown size, as their checksums would change from run to run
+(due to small variances in the internals of the generated export file.)
+Therefore, bisync automatically skips \f[C]--download-hash\f[R] for
+files with a size less than 0.
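+.PP
+For example, something like the following (a sketch only --
+\f[C]secretcrypt1\f[R] and \f[C]secretcrypt2\f[R] stand in for two
+hypothetical crypt remotes with different passwords) would compare
+downloaded MD5s wherever no stored checksum is available:
+.IP
+.nf
+\f[C]
+rclone bisync secretcrypt1:path secretcrypt2:path --compare size,modtime,checksum --download-hash
+\f[R]
+.fi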
+.PP +See also: \f[C]Hasher\f[R] (https://rclone.org/hasher/) backend, +\f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/) +command, +\f[C]rclone check --download\f[R] (https://rclone.org/commands/rclone_check/) +option, \f[C]md5sum\f[R] (https://rclone.org/commands/rclone_md5sum/) +command .SS --max-delete .PP As a safety check, if greater than the \f[C]--max-delete\f[R] percent of @@ -26381,6 +28465,195 @@ the MD5 hash of the current filters file and compares it to the hash stored in the \f[C].md5\f[R] file. If they don\[aq]t match, the run aborts with a critical error and thus forces you to do a \f[C]--resync\f[R], likely avoiding a disaster. +.SS --conflict-resolve CHOICE +.PP +In bisync, a \[dq]conflict\[dq] is a file that is \f[I]new\f[R] or +\f[I]changed\f[R] on \f[I]both sides\f[R] (relative to the prior run) +AND is \f[I]not currently identical\f[R] on both sides. +\f[C]--conflict-resolve\f[R] controls how bisync handles such a +scenario. +The currently supported options are: +.IP \[bu] 2 +\f[C]none\f[R] - (the default) - do not attempt to pick a winner, keep +and rename both files according to \f[C]--conflict-loser\f[R] and +\f[C]--conflict-suffix\f[R] settings. +For example, with the default settings, \f[C]file.txt\f[R] on Path1 is +renamed \f[C]file.txt.conflict1\f[R] and \f[C]file.txt\f[R] on Path2 is +renamed \f[C]file.txt.conflict2\f[R]. +Both are copied to the opposite path during the run, so both sides end +up with a copy of both files. +(As \f[C]none\f[R] is the default, it is not necessary to specify +\f[C]--conflict-resolve none\f[R] -- you can just omit the flag.) +.IP \[bu] 2 +\f[C]newer\f[R] - the newer file (by \f[C]modtime\f[R]) is considered +the winner and is copied without renaming. +The older file (the \[dq]loser\[dq]) is handled according to +\f[C]--conflict-loser\f[R] and \f[C]--conflict-suffix\f[R] settings +(either renamed or deleted.) For example, if \f[C]file.txt\f[R] on Path1 +is newer than \f[C]file.txt\f[R] on Path2, the result on both sides +(with other default settings) will be \f[C]file.txt\f[R] (winner from +Path1) and \f[C]file.txt.conflict1\f[R] (loser from Path2). +.IP \[bu] 2 +\f[C]older\f[R] - same as \f[C]newer\f[R], except the older file is +considered the winner, and the newer file is considered the loser. +.IP \[bu] 2 +\f[C]larger\f[R] - the larger file (by \f[C]size\f[R]) is considered the +winner (regardless of \f[C]modtime\f[R], if any). +.IP \[bu] 2 +\f[C]smaller\f[R] - the smaller file (by \f[C]size\f[R]) is considered +the winner (regardless of \f[C]modtime\f[R], if any). +.IP \[bu] 2 +\f[C]path1\f[R] - the version from Path1 is unconditionally considered +the winner (regardless of \f[C]modtime\f[R] and \f[C]size\f[R], if any). +This can be useful if one side is usually more trusted or up-to-date +than the other. +.IP \[bu] 2 +\f[C]path2\f[R] - same as \f[C]path1\f[R], except the path2 version is +considered the winner. +.PP +For all of the above options, note the following: - If either of the +underlying remotes lacks support for the chosen method, it will be +ignored and fall back to \f[C]none\f[R]. +(For example, if \f[C]--conflict-resolve newer\f[R] is set, but one of +the paths uses a remote that doesn\[aq]t support \f[C]modtime\f[R].) - +If a winner can\[aq]t be determined because the chosen method\[aq]s +attribute is missing or equal, it will be ignored and fall back to +\f[C]none\f[R]. 
+(For example, if \f[C]--conflict-resolve newer\f[R] is set, but the +Path1 and Path2 modtimes are identical, even if the sizes may differ.) - +If the file\[aq]s content is currently identical on both sides, it is +not considered a \[dq]conflict\[dq], even if new or changed on both +sides since the prior sync. +(For example, if you made a change on one side and then synced it to the +other side by other means.) Therefore, none of the conflict resolution +flags apply in this scenario. +- The conflict resolution flags do not apply during a +\f[C]--resync\f[R], as there is no \[dq]prior run\[dq] to speak of (but +see \f[C]--resync-mode\f[R] for similar options.) +.SS --conflict-loser CHOICE +.PP +\f[C]--conflict-loser\f[R] determines what happens to the +\[dq]loser\[dq] of a sync conflict (when \f[C]--conflict-resolve\f[R] +determines a winner) or to both files (when there is no winner.) The +currently supported options are: +.IP \[bu] 2 +\f[C]num\f[R] - (the default) - auto-number the conflicts by +automatically appending the next available number to the +\f[C]--conflict-suffix\f[R], in chronological order. +For example, with the default settings, the first conflict for +\f[C]file.txt\f[R] will be renamed \f[C]file.txt.conflict1\f[R]. +If \f[C]file.txt.conflict1\f[R] already exists, +\f[C]file.txt.conflict2\f[R] will be used instead (etc., up to a maximum +of 9223372036854775807 conflicts.) +.IP \[bu] 2 +\f[C]pathname\f[R] - rename the conflicts according to which side they +came from, which was the default behavior prior to \f[C]v1.66\f[R]. +For example, with \f[C]--conflict-suffix path\f[R], \f[C]file.txt\f[R] +from Path1 will be renamed \f[C]file.txt.path1\f[R], and +\f[C]file.txt\f[R] from Path2 will be renamed \f[C]file.txt.path2\f[R]. +If two non-identical suffixes are provided (ex. +\f[C]--conflict-suffix cloud,local\f[R]), the trailing digit is omitted. +Importantly, note that with \f[C]pathname\f[R], there is no +auto-numbering beyond \f[C]2\f[R], so if \f[C]file.txt.path2\f[R] +somehow already exists, it will be overwritten. +Using a dynamic date variable in your \f[C]--conflict-suffix\f[R] (see +below) is one possible way to avoid this. +Note also that conflicts-of-conflicts are possible, if the original +conflict is not manually resolved -- for example, if for some reason you +edited \f[C]file.txt.path1\f[R] on both sides, and those edits were +different, the result would be \f[C]file.txt.path1.path1\f[R] and +\f[C]file.txt.path1.path2\f[R] (in addition to +\f[C]file.txt.path2\f[R].) +.IP \[bu] 2 +\f[C]delete\f[R] - keep the winner only and delete the loser, instead of +renaming it. +If a winner cannot be determined (see \f[C]--conflict-resolve\f[R] for +details on how this could happen), \f[C]delete\f[R] is ignored and the +default \f[C]num\f[R] is used instead (i.e. +both versions are kept and renamed, and neither is deleted.) +\f[C]delete\f[R] is inherently the most destructive option, so use it +only with care. +.PP +For all of the above options, note that if a winner cannot be determined +(see \f[C]--conflict-resolve\f[R] for details on how this could happen), +or if \f[C]--conflict-resolve\f[R] is not in use, \f[I]both\f[R] files +will be renamed. +.SS --conflict-suffix STRING[,STRING] +.PP +\f[C]--conflict-suffix\f[R] controls the suffix that is appended when +bisync renames a \f[C]--conflict-loser\f[R] (default: +\f[C]conflict\f[R]). +\f[C]--conflict-suffix\f[R] will accept either one string or two +comma-separated strings to assign different suffixes to Path1 vs. +Path2. 
+This may be helpful later in identifying the source of the conflict. +(For example, +\f[C]--conflict-suffix dropboxconflict,laptopconflict\f[R]) +.PP +With \f[C]--conflict-loser num\f[R], a number is always appended to the +suffix. +With \f[C]--conflict-loser pathname\f[R], a number is appended only when +one suffix is specified (or when two identical suffixes are specified.) +i.e. +with \f[C]--conflict-loser pathname\f[R], all of the following would +produce exactly the same result: +.IP +.nf +\f[C] +--conflict-suffix path +--conflict-suffix path,path +--conflict-suffix path1,path2 +\f[R] +.fi +.PP +Suffixes may be as short as 1 character. +By default, the suffix is appended after any other extensions (ex. +\f[C]file.jpg.conflict1\f[R]), however, this can be changed with the +\f[C]--suffix-keep-extension\f[R] (https://rclone.org/docs/#suffix-keep-extension) +flag (i.e. +to instead result in \f[C]file.conflict1.jpg\f[R]). +.PP +\f[C]--conflict-suffix\f[R] supports several \f[I]dynamic date +variables\f[R] when enclosed in curly braces as globs. +This can be helpful to track the date and/or time that each conflict was +handled by bisync. +For example: +.IP +.nf +\f[C] +--conflict-suffix {DateOnly}-conflict +// result: myfile.txt.2006-01-02-conflict1 +\f[R] +.fi +.PP +All of the formats described +here (https://pkg.go.dev/time#pkg-constants) and +here (https://pkg.go.dev/time#example-Time.Format) are supported, but +take care to ensure that your chosen format does not use any characters +that are illegal on your remotes (for example, macOS does not allow +colons in filenames, and slashes are also best avoided as they are often +interpreted as directory separators.) To address this particular issue, +an additional \f[C]{MacFriendlyTime}\f[R] (or just \f[C]{mac}\f[R]) +option is supported, which results in \f[C]2006-01-02 0304PM\f[R]. +.PP +Note that \f[C]--conflict-suffix\f[R] is entirely separate from +rclone\[aq]s main +\f[C]--sufix\f[R] (https://rclone.org/docs/#suffix-suffix) flag. +This is intentional, as users may wish to use both flags simultaneously, +if also using \f[C]--backup-dir\f[R]. +.PP +Finally, note that the default in bisync prior to \f[C]v1.66\f[R] was to +rename conflicts with \f[C]..path1\f[R] and \f[C]..path2\f[R] (with two +periods, and \f[C]path\f[R] instead of \f[C]conflict\f[R].) Bisync now +defaults to a single dot instead of a double dot, but additional dots +can be added by including them in the specified suffix string. +For example, for behavior equivalent to the previous default, use: +.IP +.nf +\f[C] +[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path +\f[R] +.fi .SS --check-sync .PP Enabled by default, the check-sync function checks that all of the same @@ -26402,47 +28675,67 @@ The check may be run manually with \f[C]--check-sync=only\f[R]. It runs only the integrity check and terminates without actually synching. .PP -See also: Concurrent modifications -.SS --ignore-listing-checksum +Note that currently, \f[C]--check-sync\f[R] \f[B]only checks listing +snapshots and NOT the actual files on the remotes.\f[R] Note also that +the listing snapshots will not know about any changes that happened +during or after the latest bisync run, as those will be discovered on +the next run. +Therefore, while listings should always match \f[I]each other\f[R] at +the end of a bisync run, it is \f[I]expected\f[R] that they will not +match the underlying remotes, nor will the remotes match each other, if +there were changes during or after the run. 
+This is normal, and any differences will be detected and synced on the +next run. .PP -By default, bisync will retrieve (or generate) checksums (for backends -that support them) when creating the listings for both paths, and store -the checksums in the listing files. -\f[C]--ignore-listing-checksum\f[R] will disable this behavior, which -may speed things up considerably, especially on backends (such as -local (https://rclone.org/local/)) where hashes must be computed on the -fly instead of retrieved. -Please note the following: -.IP \[bu] 2 -While checksums are (by default) generated and stored in the listing -files, they are NOT currently used for determining diffs (deltas). -It is anticipated that full checksum support will be added in a future -version. -.IP \[bu] 2 -\f[C]--ignore-listing-checksum\f[R] is NOT the same as -\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), -and you may wish to use one or the other, or both. -In a nutshell: \f[C]--ignore-listing-checksum\f[R] controls whether -checksums are considered when scanning for diffs, while -\f[C]--ignore-checksum\f[R] controls whether checksums are considered -during the copy/sync operations that follow, if there ARE diffs. -.IP \[bu] 2 -Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently -computes hashes for one path \f[I]even when there\[aq]s no common hash -with the other path\f[R] (for example, a -crypt (https://rclone.org/crypt/#modification-times-and-hashes) remote.) -.IP \[bu] 2 -If both paths support checksums and have a common hash, AND -\f[C]--ignore-listing-checksum\f[R] was not specified when creating the -listings, \f[C]--check-sync=only\f[R] can be used to compare Path1 vs. -Path2 checksums (as of the time the previous listings were created.) -However, \f[C]--check-sync=only\f[R] will NOT include checksums if the -previous listings were generated on a run using -\f[C]--ignore-listing-checksum\f[R]. -For a more robust integrity check of the current state, consider using -\f[C]check\f[R] (or +For a robust integrity check of the current state of the remotes (as +opposed to just their listing snapshots), consider using \f[C]check\f[R] +(or \f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/), -if at least one path is a \f[C]crypt\f[R] remote.) +if at least one path is a \f[C]crypt\f[R] remote) instead of +\f[C]--check-sync\f[R], keeping in mind that differences are expected if +files changed during or after your last bisync run. +.PP +For example, a possible sequence could look like this: +.IP "1." 3 +Normally scheduled bisync run: +.IP +.nf +\f[C] +rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient +\f[R] +.fi +.IP "2." 3 +Periodic independent integrity check (perhaps scheduled nightly or +weekly): +.IP +.nf +\f[C] +rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt +\f[R] +.fi +.IP "3." 3 +If diffs are found, you have some choices to correct them. 
+If one side is more up-to-date and you want to make the other side match +it, you could run: +.IP +.nf +\f[C] +rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v +\f[R] +.fi +.PP +(or switch Path1 and Path2 to make Path2 the source-of-truth) +.PP +Or, if neither side is totally up-to-date, you could run a +\f[C]--resync\f[R] to bring them back into agreement (but remember that +this could cause deleted files to re-appear.) +.PP +*Note also that \f[C]rclone check\f[R] does not currently include empty +directories, so if you want to know if any empty directories are out of +sync, consider alternatively running the above \f[C]rclone sync\f[R] +command with \f[C]--dry-run\f[R] added. +.PP +See also: Concurrent modifications, \f[C]--resilient\f[R] .SS --resilient .PP \f[B]\f[BI]Caution: this is an experimental feature. Use at your own @@ -26475,6 +28768,135 @@ Certain more serious errors will still enforce a \f[C]--resync\f[R] lockout, even in \f[C]--resilient\f[R] mode, to prevent data loss. .PP Behavior of \f[C]--resilient\f[R] may change in a future version. +(See also: \f[C]--recover\f[R], \f[C]--max-lock\f[R], Graceful Shutdown) +.SS --recover +.PP +If \f[C]--recover\f[R] is set, in the event of a sudden interruption or +other un-graceful shutdown, bisync will attempt to automatically recover +on the next run, instead of requiring \f[C]--resync\f[R]. +Bisync is able to recover robustly by keeping one \[dq]backup\[dq] +listing at all times, representing the state of both paths after the +last known successful sync. +Bisync can then compare the current state with this snapshot to +determine which changes it needs to retry. +Changes that were synced after this snapshot (during the run that was +later interrupted) will appear to bisync as if they are \[dq]new or +changed on both sides\[dq], but in most cases this is not a problem, as +bisync will simply do its usual \[dq]equality check\[dq] and learn that +no action needs to be taken on these files, since they are already +identical on both sides. +.PP +In the rare event that a file is synced successfully during a run that +later aborts, and then that same file changes AGAIN before the next run, +bisync will think it is a sync conflict, and handle it accordingly. +(From bisync\[aq]s perspective, the file has changed on both sides since +the last trusted sync, and the files on either side are not currently +identical.) Therefore, \f[C]--recover\f[R] carries with it a slightly +increased chance of having conflicts -- though in practice this is +pretty rare, as the conditions required to cause it are quite specific. +This risk can be reduced by using bisync\[aq]s \[dq]Graceful +Shutdown\[dq] mode (triggered by sending \f[C]SIGINT\f[R] or +\f[C]Ctrl+C\f[R]), when you have the choice, instead of forcing a sudden +termination. +.PP +\f[C]--recover\f[R] and \f[C]--resilient\f[R] are similar, but distinct +-- the main difference is that \f[C]--resilient\f[R] is about +\f[I]retrying\f[R], while \f[C]--recover\f[R] is about +\f[I]recovering\f[R]. +Most users will probably want both. +\f[C]--resilient\f[R] allows retrying when bisync has chosen to abort +itself due to safety features such as failing \f[C]--check-access\f[R] +or detecting a filter change. +\f[C]--resilient\f[R] does not cover external interruptions such as a +user shutting down their computer in the middle of a sync -- that is +what \f[C]--recover\f[R] is for. 
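+.PP
+Combining these, a \[dq]set-it-and-forget-it\[dq] scheduled run might
+add flags like the following (a sketch; the remote names and paths are
+placeholders):
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --resilient --recover --max-lock 2m --conflict-resolve newer
+\f[R]
+.fi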
+.SS --max-lock
+.PP
+Bisync uses lock files as a safety feature to prevent interference from
+other bisync runs while it is running.
+Bisync normally removes these lock files at the end of a run, but if
+bisync is abruptly interrupted, these files will be left behind.
+By default, they will lock out all future runs, until the user has a
+chance to manually check things out and remove the lock.
+As an alternative, \f[C]--max-lock\f[R] can be used to make them
+automatically expire after a certain period of time, so that future runs
+are not locked out forever, and auto-recovery is possible.
+\f[C]--max-lock\f[R] can be any duration \f[C]2m\f[R] or greater (or
+\f[C]0\f[R] to disable).
+If set, lock files older than this will be considered \[dq]expired\[dq],
+and future runs will be allowed to disregard them and proceed.
+(Note that the \f[C]--max-lock\f[R] duration must be set by the process
+that left the lock file -- not the later one interpreting it.)
+.PP
+If set, bisync will also \[dq]renew\[dq] these lock files every
+\f[C]--max-lock minus one minute\f[R] throughout a run, for extra
+safety.
+(For example, with \f[C]--max-lock 5m\f[R], bisync would renew the lock
+file (for another 5 minutes) every 4 minutes until the run has
+completed.) In other words, it should not be possible for a lock file to
+pass its expiration time while the process that created it is still
+running -- and you can therefore be reasonably sure that any
+\f[I]expired\f[R] lock file you may find was left there by an
+interrupted run, not one that is still running and just taking a while.
+.PP
+If \f[C]--max-lock\f[R] is \f[C]0\f[R] or not set, the default is that
+lock files will never expire, and will block future runs (of these same
+two bisync paths) indefinitely.
+.PP
+For maximum resilience from disruptions, consider setting a relatively
+short duration like \f[C]--max-lock 2m\f[R] along with
+\f[C]--resilient\f[R] and \f[C]--recover\f[R], and a relatively frequent
+cron schedule.
+The result will be a very robust \[dq]set-it-and-forget-it\[dq] bisync
+run that can automatically bounce back from almost any interruption it
+might encounter, without requiring the user to get involved and run a
+\f[C]--resync\f[R].
+(See also: Graceful Shutdown mode)
+.SS --backup-dir1 and --backup-dir2
+.PP
+As of \f[C]v1.66\f[R],
+\f[C]--backup-dir\f[R] (https://rclone.org/docs/#backup-dir-dir) is
+supported in bisync.
+Because \f[C]--backup-dir\f[R] must be a non-overlapping path on the
+same remote, Bisync has introduced new \f[C]--backup-dir1\f[R] and
+\f[C]--backup-dir2\f[R] flags to support separate backup-dirs for
+\f[C]Path1\f[R] and \f[C]Path2\f[R] (bisyncing between different remotes
+with \f[C]--backup-dir\f[R] would not otherwise be possible.)
+\f[C]--backup-dir1\f[R] and \f[C]--backup-dir2\f[R] can use different
+remotes from each other, but \f[C]--backup-dir1\f[R] must use the same
+remote as \f[C]Path1\f[R], and \f[C]--backup-dir2\f[R] must use the same
+remote as \f[C]Path2\f[R].
+Each backup directory must not overlap its respective bisync Path
+without being excluded by a filter rule.
+.PP
+The standard \f[C]--backup-dir\f[R] will also work, if both paths use
+the same remote (but note that deleted files from both paths would be
+mixed together in the same dir).
+If either \f[C]--backup-dir1\f[R] or \f[C]--backup-dir2\f[R] is set, it
+will override \f[C]--backup-dir\f[R].
+.PP +Example: +.IP +.nf +\f[C] +rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case +\f[R] +.fi +.PP +In this example, if the user deletes a file in +\f[C]/Users/someuser/some/local/path/Bisync\f[R], bisync will propagate +the delete to the other side by moving the corresponding file from +\f[C]gdrive:Bisync\f[R] to \f[C]gdrive:BackupDir\f[R]. +If the user deletes a file from \f[C]gdrive:Bisync\f[R], bisync moves it +from \f[C]/Users/someuser/some/local/path/Bisync\f[R] to +\f[C]/Users/someuser/some/local/path/BackupDir\f[R]. +.PP +In the event of a rename due to a sync conflict, the rename is not +considered a delete, unless a previous conflict with the same name +already exists and would get overwritten. +.PP +See also: \f[C]--suffix\f[R] (https://rclone.org/docs/#suffix-suffix), +\f[C]--suffix-keep-extension\f[R] (https://rclone.org/docs/#suffix-keep-extension) .SS Operation .SS Runtime flow details .PP @@ -26493,8 +28915,10 @@ Propagate changes on \f[C]path1\f[R] to \f[C]path2\f[R], and vice-versa. Lock file prevents multiple simultaneous runs when taking a while. This can be particularly useful if bisync is run by cron scheduler. .IP \[bu] 2 -Handle change conflicts non-destructively by creating \f[C]..path1\f[R] -and \f[C]..path2\f[R] file versions. +Handle change conflicts non-destructively by creating +\f[C].conflict1\f[R], \f[C].conflict2\f[R], etc. +file versions, according to \f[C]--conflict-resolve\f[R], +\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] settings. .IP \[bu] 2 File system access health check using \f[C]RCLONE_TEST\f[R] files (see the \f[C]--check-access\f[R] flag). @@ -26625,10 +29049,12 @@ T}@T{ File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) T}@T{ -Files renamed to _Path1 and _Path2 +Conflicts handled according to \f[C]--conflict-resolve\f[R] & +\f[C]--conflict-loser\f[R] settings T}@T{ -\f[C]rclone copy\f[R] _Path2 file to Path1, \f[C]rclone copy\f[R] _Path1 -file to Path2 +default: \f[C]rclone copy\f[R] renamed \f[C]Path2.conflict2\f[R] file to +Path1, \f[C]rclone copy\f[R] renamed \f[C]Path1.conflict1\f[R] file to +Path2 T} T{ Path2 newer AND Path1 changed @@ -26636,10 +29062,12 @@ T}@T{ File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) T}@T{ -Files renamed to _Path1 and _Path2 +Conflicts handled according to \f[C]--conflict-resolve\f[R] & +\f[C]--conflict-loser\f[R] settings T}@T{ -\f[C]rclone copy\f[R] _Path2 file to Path1, \f[C]rclone copy\f[R] _Path1 -file to Path2 +default: \f[C]rclone copy\f[R] renamed \f[C]Path2.conflict2\f[R] file to +Path1, \f[C]rclone copy\f[R] renamed \f[C]Path1.conflict1\f[R] file to +Path2 T} T{ Path2 newer AND Path1 deleted @@ -26678,8 +29106,7 @@ new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently \f[I]identical\f[R] (using the same underlying function as \f[C]check\f[R].) If bisync concludes that the files are identical, it will skip them and move on. -Otherwise, it will create renamed \f[C]..Path1\f[R] and -\f[C]..Path2\f[R] duplicates, as before. 
+Otherwise, it will create renamed duplicates, as before. This behavior also improves the experience of renaming directories (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories), as a \f[C]--resync\f[R] is no longer required, so long as the same @@ -26699,21 +29126,13 @@ Consider the situation carefully and perhaps use \f[C]--dry-run\f[R] before you commit to the changes. .SS Modification times .PP -Bisync relies on file timestamps to identify changed files and will -\f[I]refuse\f[R] to operate if backend lacks the modification time -support. -.PP +By default, bisync compares files by modification time and size. If you or your application should change the content of a file without -changing the modification time then bisync will \f[I]not\f[R] notice the -change, and thus will not copy it to the other side. -.PP -Note that on some cloud storage systems it is not possible to have file -timestamps that match \f[I]precisely\f[R] between the local and other -filesystems. -.PP -Bisync\[aq]s approach to this problem is by tracking the changes on each -side \f[I]separately\f[R] over time with a local database of files in -that side then applying the resulting changes on the other side. +changing the modification time and size, then bisync will \f[I]not\f[R] +notice the change, and thus will not copy it to the other side. +As an alternative, consider comparing by checksum (if your remotes +support it). +See \f[C]--compare\f[R] for details. .SS Error handling .PP Certain bisync critical errors, such as file copy/move failing, will @@ -26741,7 +29160,8 @@ Some errors are considered temporary and re-running the bisync is not blocked. The \f[I]critical return\f[R] blocks further bisync runs. .PP -See also: \f[C]--resilient\f[R] +See also: \f[C]--resilient\f[R], \f[C]--recover\f[R], +\f[C]--max-lock\f[R], Graceful Shutdown .SS Lock file .PP When bisync is running, a lock file is created in the bisync working @@ -26754,6 +29174,8 @@ The lock file effectively blocks follow-on (e.g., scheduled by \f[I]cron\f[R]) runs when the prior invocation is taking a long time. The lock file contains \f[I]PID\f[R] of the blocking process, which may help in debug. +Lock files can be set to automatically expire after a certain amount of +time, using the \f[C]--max-lock\f[R] flag. .PP \f[B]Note\f[R] that while concurrent bisync runs are allowed, \f[I]be very cautious\f[R] that there is no overlap in the trees being synched @@ -26765,86 +29187,84 @@ and general mayhem. - \f[C]0\f[R] on a successful run, - \f[C]1\f[R] for a non-critical failing run (a rerun may be successful), - \f[C]2\f[R] for a critically aborted run (requires a \f[C]--resync\f[R] to recover). +.SS Graceful Shutdown +.PP +Bisync has a \[dq]Graceful Shutdown\[dq] mode which is activated by +sending \f[C]SIGINT\f[R] or pressing \f[C]Ctrl+C\f[R] during a run. +Once triggered, bisync will use best efforts to exit cleanly before the +timer runs out. +If bisync is in the middle of transferring files, it will attempt to +cleanly empty its queue by finishing what it has started but not taking +more. +If it cannot do so within 30 seconds, it will cancel the in-progress +transfers at that point and then give itself a maximum of 60 seconds to +wrap up, save its state for next time, and exit. +With the \f[C]-vP\f[R] flags you will see constant status updates and a +final confirmation of whether or not the graceful shutdown was +successful. 
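+.PP
+For example (a sketch assuming a Unix-like system with \f[C]pkill\f[R]
+available), a backgrounded or scheduled bisync run could be gracefully
+stopped from another terminal with:
+.IP
+.nf
+\f[C]
+pkill -INT -f \[aq]rclone bisync\[aq]
+\f[R]
+.fi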
+.PP +At any point during the \[dq]Graceful Shutdown\[dq] sequence, a second +\f[C]SIGINT\f[R] or \f[C]Ctrl+C\f[R] will trigger an immediate, +un-graceful exit, which will leave things in a messier state. +Usually a robust recovery will still be possible if using +\f[C]--recover\f[R] mode, otherwise you will need to do a +\f[C]--resync\f[R]. +.PP +If you plan to use Graceful Shutdown mode, it is recommended to use +\f[C]--resilient\f[R] and \f[C]--recover\f[R], and it is important to +NOT use \f[C]--inplace\f[R] (https://rclone.org/docs/#inplace), +otherwise you risk leaving partially-written files on one side, which +may be confused for real files on the next run. +Note also that in the event of an abrupt interruption, a lock file will +be left behind to block concurrent runs. +You will need to delete it before you can proceed with the next run (or +wait for it to expire on its own, if using \f[C]--max-lock\f[R].) .SS Limitations .SS Supported backends .PP Bisync is considered \f[I]BETA\f[R] and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - -OneDrive - S3 - SFTP - Yandex Disk +OneDrive - S3 - SFTP - Yandex Disk - Crypt .PP It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we\[aq]ll update the list. Run the test suite to check for proper operation as described below. .PP -First release of \f[C]rclone bisync\f[R] requires that underlying -backend supports the modification time feature and will refuse to run -otherwise. -This limitation will be lifted in a future \f[C]rclone bisync\f[R] -release. +The first release of \f[C]rclone bisync\f[R] required both underlying +backends to support modification times, and refused to run otherwise. +This limitation has been lifted as of \f[C]v1.66\f[R], as bisync now +supports comparing checksum and/or size instead of (or in addition to) +modtime. +See \f[C]--compare\f[R] for details. .SS Concurrent modifications .PP -When using \f[B]Local, FTP or SFTP\f[R] remotes rclone does not create -\f[I]temporary\f[R] files at the destination when copying, and thus if -the connection is lost the created file may be corrupt, which will -likely propagate back to the original path on the next sync, resulting -in data loss. -This will be solved in a future release, there is no workaround at the -moment. +When using \f[B]Local, FTP or SFTP\f[R] remotes with +\f[C]--inplace\f[R] (https://rclone.org/docs/#inplace), rclone does not +create \f[I]temporary\f[R] files at the destination when copying, and +thus if the connection is lost the created file may be corrupt, which +will likely propagate back to the original path on the next sync, +resulting in data loss. +It is therefore recommended to \f[I]omit\f[R] \f[C]--inplace\f[R]. .PP Files that \f[B]change during\f[R] a bisync run may result in data loss. -This has been seen in a highly dynamic environment, where the filesystem -is getting hammered by running processes during the sync. -The currently recommended solution is to sync at quiet times or filter -out unnecessary directories and files. -.PP -As an alternative -approach (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=scans%2C%20to%20avoid-,errors%20if%20files%20changed%20during%20sync,-Given%20the%20number), -consider using \f[C]--check-sync=false\f[R] (and possibly -\f[C]--resilient\f[R]) to make bisync more forgiving of filesystems that -change during the sync. 
-Be advised that this may cause bisync to miss events that occur during a -bisync run, so it is a good idea to supplement this with a periodic -independent integrity check, and corrective sync if diffs are found. -For example, a possible sequence could look like this: -.IP "1." 3 -Normally scheduled bisync run: -.IP -.nf -\f[C] -rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -\f[R] -.fi -.IP "2." 3 -Periodic independent integrity check (perhaps scheduled nightly or -weekly): -.IP -.nf -\f[C] -rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt -\f[R] -.fi -.IP "3." 3 -If diffs are found, you have some choices to correct them. -If one side is more up-to-date and you want to make the other side match -it, you could run: -.IP -.nf -\f[C] -rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v -\f[R] -.fi -.PP -(or switch Path1 and Path2 to make Path2 the source-of-truth) -.PP -Or, if neither side is totally up-to-date, you could run a -\f[C]--resync\f[R] to bring them back into agreement (but remember that -this could cause deleted files to re-appear.) -.PP -*Note also that \f[C]rclone check\f[R] does not currently include empty -directories, so if you want to know if any empty directories are out of -sync, consider alternatively running the above \f[C]rclone sync\f[R] -command with \f[C]--dry-run\f[R] added. +Prior to \f[C]rclone v1.66\f[R], this was commonly seen in highly +dynamic environments, where the filesystem was getting hammered by +running processes during the sync. +As of \f[C]rclone v1.66\f[R], bisync was redesigned to use a +\[dq]snapshot\[dq] model, greatly reducing the risks from changes during +a sync. +Changes that are not detected during the current sync will now be +detected during the following sync, and will no longer cause the entire +run to throw a critical error. +There is additionally a mechanism to mark files as needing to be +internally rechecked next time, for added safety. +It should therefore no longer be necessary to sync only at quiet times +-- however, note that an error can still occur if a file happens to +change at the exact moment it\[aq]s being read/written by bisync (same +as would happen in \f[C]rclone sync\f[R].) (See also: +\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), +\f[C]--local-no-check-updated\f[R] (https://rclone.org/local/#local-no-check-updated)) .SS Empty directories .PP By default, new/deleted empty directories on one path are \f[I]not\f[R] @@ -26870,11 +29290,21 @@ It looks scarier than it is, but it\[aq]s still probably best to stick to one or the other, and use \f[C]--resync\f[R] when you need to switch. .SS Renamed directories .PP -Renaming a folder on the Path1 side results in deleting all files on the -Path2 side and then copying all files again from Path1 to Path2. +By default, renaming a folder on the Path1 side results in deleting all +files on the Path2 side and then copying all files again from Path1 to +Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new. 
-Currently, the most effective and efficient method of renaming a +.PP +A recommended solution is to use +\f[C]--track-renames\f[R] (https://rclone.org/docs/#track-renames), +which is now supported in bisync as of \f[C]rclone v1.66\f[R]. +Note that \f[C]--track-renames\f[R] is not available during +\f[C]--resync\f[R], as \f[C]--resync\f[R] does not delete anything +(\f[C]--track-renames\f[R] only supports \f[C]sync\f[R], not +\f[C]copy\f[R].) +.PP +Otherwise, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of \f[C]rclone v1.64\f[R], a \f[C]--resync\f[R] is no longer required after doing so, as bisync will automatically detect that Path1 @@ -26892,32 +29322,29 @@ directories (https://github.com/rclone/rclone/commit/cbf3d4356135814921382dd3285 For now, the recommended way to avoid using \f[C]--fast-list\f[R] is to add \f[C]--disable ListR\f[R] to all bisync commands. The default behavior may change in a future version. -.SS Overridden Configs +.SS Case (and unicode) sensitivity .PP -When rclone detects an overridden config, it adds a suffix like -\f[C]{ABCDE}\f[R] on the fly to the internal name of the remote. -Bisync follows suit by including this suffix in its listing filenames. -However, this suffix does not necessarily persist from run to run, -especially if different flags are provided. -So if next time the suffix assigned is \f[C]{FGHIJ}\f[R], bisync will -get confused, because it\[aq]s looking for a listing file with -\f[C]{FGHIJ}\f[R], when the file it wants has \f[C]{ABCDE}\f[R]. -As a result, it throws -\f[C]Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run\f[R] -and refuses to run again until the user runs a \f[C]--resync\f[R] -(unless using \f[C]--resilient\f[R]). -The best workaround at the moment is to set any backend-specific flags -in the config file (https://rclone.org/commands/rclone_config/) instead -of specifying them with command flags. -(You can still override them as needed for other rclone commands.) -.SS Case sensitivity +As of \f[C]v1.66\f[R], case and unicode form differences no longer cause +critical errors, and normalization (when comparing between filesystems) +is handled according to the same flags and defaults as +\f[C]rclone sync\f[R]. +See the following options (all of which are supported by bisync) to +control this behavior more granularly: - +\f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case) - +\f[C]--ignore-case-sync\f[R] (https://rclone.org/docs/#ignore-case-sync) +- +\f[C]--no-unicode-normalization\f[R] (https://rclone.org/docs/#no-unicode-normalization) +- +\f[C]--local-unicode-normalization\f[R] (https://rclone.org/local/#local-unicode-normalization) +and +\f[C]--local-case-sensitive\f[R] (https://rclone.org/local/#local-case-sensitive) +(caution: these are normally not what you want.) .PP -Synching with \f[B]case-insensitive\f[R] filesystems, such as Windows or -\f[C]Box\f[R], can result in file name conflicts. -This will be fixed in a future release. -The near-term workaround is to make sure that files on both sides -don\[aq]t have spelling case differences (\f[C]Smile.jpg\f[R] vs. -\f[C]smile.jpg\f[R]). +Note that in the (probably rare) event that \f[C]--fix-case\f[R] is used +AND a file is new/changed on both sides AND the checksums match AND the +filename case does not match, the Path1 filename is considered the +winner, for the purposes of \f[C]--fix-case\f[R] (Path2 will be renamed +to match it). 
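+.PP
+For example (an illustrative sketch -- the paths are placeholders), to
+bisync a case-sensitive local filesystem with a case-insensitive remote
+while forcing filename case to agree:
+.IP
+.nf
+\f[C]
+rclone bisync /local/path remote:path --fix-case -MPc -v
+\f[R]
+.fi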
.SS Windows support .PP Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on @@ -27273,27 +29700,72 @@ If the error is \f[C]This file has been identified as malware or spam and cannot be downloaded\f[R], consider using the flag --drive-acknowledge-abuse (https://rclone.org/drive/#drive-acknowledge-abuse). -.SS Google Doc files +.SS Google Docs (and other files of unknown size) .PP -Google docs exist as virtual files on Google Drive and cannot be -transferred to other filesystems natively. -While it is possible to export a Google doc to a normal file (with -\f[C].xlsx\f[R] extension, for example), it is not possible to import a -normal file back into a Google document. +As of \f[C]v1.66\f[R], Google +Docs (https://rclone.org/drive/#import-export-of-google-documents) +(including Google Sheets, Slides, etc.) are now supported in bisync, +subject to the same options, defaults, and limitations as in +\f[C]rclone sync\f[R]. +When bisyncing drive with non-drive backends, the drive -> non-drive +direction is controlled by +\f[C]--drive-export-formats\f[R] (https://rclone.org/drive/#drive-export-formats) +(default \f[C]\[dq]docx,xlsx,pptx,svg\[dq]\f[R]) and the non-drive -> +drive direction is controlled by +\f[C]--drive-import-formats\f[R] (https://rclone.org/drive/#drive-import-formats) +(default none.) .PP -Bisync\[aq]s handling of Google Doc files is to flag them in the run log -output for user\[aq]s attention and ignore them for any file transfers, -deletes, or syncs. -They will show up with a length of \f[C]-1\f[R] in the listings. -This bisync run is otherwise successful: -.IP -.nf -\f[C] -2021/05/11 08:23:15 INFO : Synching Path1 \[dq]/path/to/local/tree/base/\[dq] with Path2 \[dq]GDrive:\[dq] -2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: \[dq]- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx\[dq] -2021/05/11 08:23:15 INFO : Bisync successful -\f[R] -.fi +For example, with the default export/import formats, a Google Sheet on +the drive side will be synced to an \f[C].xlsx\f[R] file on the +non-drive side. +In the reverse direction, \f[C].xlsx\f[R] files with filenames that +match an existing Google Sheet will be synced to that Google Sheet, +while \f[C].xlsx\f[R] files that do NOT match an existing Google Sheet +will be copied to drive as normal \f[C].xlsx\f[R] files (without +conversion to Sheets, although the Google Drive web browser UI may still +give you the option to open it as one.) +.PP +If \f[C]--drive-import-formats\f[R] is set (it\[aq]s not, by default), +then all of the specified formats will be converted to Google Docs, if +there is no existing Google Doc with a matching name. +Caution: such conversion can be quite lossy, and in most cases it\[aq]s +probably not what you want! +.PP +To bisync Google Docs as URL shortcut links (in a manner similar to +\[dq]Drive for Desktop\[dq]), use: \f[C]--drive-export-formats url\f[R] +(or +alternatives (https://rclone.org/drive/#exportformats:~:text=available%20Google%20Documents.-,Extension,macOS,-Standard%20options).) +.PP +Note that these link files cannot be edited on the non-drive side -- you +will get errors if you try to sync an edited link file back to drive. +They CAN be deleted (it will result in deleting the corresponding Google +Doc.) If you create a \f[C].url\f[R] file on the non-drive side that +does not match an existing Google Doc, bisyncing it will just result in +copying the literal \f[C].url\f[R] file over to drive (no Google Doc +will be created.) 
So, as a general rule of thumb, think of them as
+read-only placeholders on the non-drive side, and make all your changes
+on the drive side.
+.PP
+Likewise, even with other export-formats, it is best to only move/rename
+Google Docs on the drive side.
+This is because otherwise, bisync will interpret this as a file deleted
+and another created, and accordingly, it will delete the Google Doc and
+create a new file at the new path.
+(Whether or not that new file is a Google Doc depends on
+\f[C]--drive-import-formats\f[R].)
+.PP
+Lastly, take note that all Google Docs on the drive side have a size of
+\f[C]-1\f[R] and no checksum.
+Therefore, they cannot be reliably synced with the \f[C]--checksum\f[R]
+or \f[C]--size-only\f[R] flags.
+(To be exact: they will still get created/deleted, and bisync\[aq]s
+delta engine will notice changes and queue them for syncing, but the
+underlying sync function will consider them identical and skip them.) To
+work around this, use the default (modtime and size) instead of
+\f[C]--checksum\f[R] or \f[C]--size-only\f[R].
+.PP
+To ignore Google Docs entirely, use
+\f[C]--drive-skip-gdocs\f[R] (https://rclone.org/drive/#drive-skip-gdocs).
.SS Usage examples
.SS Cron
.PP
@@ -27789,6 +30261,77 @@ Also note a number of academic publications by Benjamin Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization)
about \f[I]Unison\f[R] and synchronization in general.
.SS Changelog
+.SS \f[C]v1.66\f[R]
+.IP \[bu] 2
+Copies and deletes are now handled in one operation instead of two
+.IP \[bu] 2
+\f[C]--track-renames\f[R] and \f[C]--backup-dir\f[R] are now supported
+.IP \[bu] 2
+Partial uploads known issue on
+\f[C]local\f[R]/\f[C]ftp\f[R]/\f[C]sftp\f[R] has been resolved (unless
+using \f[C]--inplace\f[R])
+.IP \[bu] 2
+Final listings are now generated from sync results, to avoid needing to
+re-list
+.IP \[bu] 2
+Bisync is now much more resilient to changes that happen during a bisync
+run, and far less prone to critical errors / undetected changes
+.IP \[bu] 2
+Bisync is now capable of rolling a file listing back in cases of
+uncertainty, essentially marking the file as needing to be rechecked
+next time.
+.IP \[bu] 2
+A few basic terminal colors are now supported, controllable with
+\f[C]--color\f[R] (https://rclone.org/docs/#color-when)
+(\f[C]AUTO\f[R]|\f[C]NEVER\f[R]|\f[C]ALWAYS\f[R])
+.IP \[bu] 2
+Initial listing snapshots of Path1 and Path2 are now generated
+concurrently, using the same \[dq]march\[dq] infrastructure as
+\f[C]check\f[R] and \f[C]sync\f[R], for performance improvements and
+less risk of
+error (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors).
+.IP \[bu] 2
+Fixed handling of unicode normalization and case insensitivity, support
+for \f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case),
+\f[C]--ignore-case-sync\f[R], \f[C]--no-unicode-normalization\f[R]
+.IP \[bu] 2
+\f[C]--resync\f[R] is now much more efficient (especially for users of
+\f[C]--create-empty-src-dirs\f[R])
+.IP \[bu] 2
+Google Docs (and other files of unknown size) are now supported (with
+the same options as in \f[C]sync\f[R])
+.IP \[bu] 2
+Equality checks before a sync conflict rename now fall back to
+\f[C]cryptcheck\f[R] (when possible) or \f[C]--download\f[R], instead
+of \f[C]--size-only\f[R], when \f[C]check\f[R] is not available.
+.IP \[bu] 2 +Bisync no longer fails to find the correct listing file when configs are +overridden with backend-specific flags. +.IP \[bu] 2 +Bisync now fully supports comparing based on any combination of size, +modtime, and checksum, lifting the prior restriction on backends without +modtime support. +.IP \[bu] 2 +Bisync now supports a \[dq]Graceful Shutdown\[dq] mode to cleanly cancel +a run early without requiring \f[C]--resync\f[R]. +.IP \[bu] 2 +New \f[C]--recover\f[R] flag allows robust recovery in the event of +interruptions, without requiring \f[C]--resync\f[R]. +.IP \[bu] 2 +A new \f[C]--max-lock\f[R] setting allows lock files to automatically +renew and expire, for better automatic recovery when a run is +interrupted. +.IP \[bu] 2 +Bisync now supports auto-resolving sync conflicts and customizing rename +behavior with new \f[C]--conflict-resolve\f[R], +\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] flags. +.IP \[bu] 2 +A new \f[C]--resync-mode\f[R] flag allows more control over which +version of a file gets kept during a \f[C]--resync\f[R]. +.IP \[bu] 2 +Bisync now supports +\f[C]--retries\f[R] (https://rclone.org/docs/#retries-int) and +\f[C]--retries-sleep\f[R] (when \f[C]--resilient\f[R] is set.) .SS \f[C]v1.64\f[R] .IP \[bu] 2 Fixed an @@ -28299,6 +30842,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot +.SS --fichier-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_FICHIER_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP \f[C]rclone about\f[R] is not supported by the 1Fichier backend. @@ -28444,404 +31000,23 @@ Env Var: RCLONE_ALIAS_REMOTE Type: string .IP \[bu] 2 Required: true -.SH Amazon Drive -.PP -Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage -service run by Amazon for consumers. -.SS Status -.PP -\f[B]Important:\f[R] rclone supports Amazon Drive only if you have your -own set of API keys. -Unfortunately the Amazon Drive developer -program (https://developer.amazon.com/amazon-drive) is now closed to new -entries so if you don\[aq]t already have your own set of keys you will -not be able to use rclone with Amazon Drive. -.PP -For the history on why rclone no longer has a set of Amazon Drive API -keys see the -forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314). -.PP -If you happen to know anyone who works at Amazon then please ask them to -re-instate rclone into the Amazon Drive developer program - thanks! -.SS Configuration -.PP -The initial setup for Amazon Drive involves getting a token from Amazon -which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -The configuration process for Amazon Drive may involve using an oauth -proxy (https://github.com/ncw/oauthproxy). -This is used to keep the Amazon credentials out of the source code. -The proxy runs in Google\[aq]s very secure App Engine environment and -doesn\[aq]t store any credentials which pass through it. -.PP -Since rclone doesn\[aq]t currently have its own Amazon Drive credentials -so you will either need to have your own \f[C]client_id\f[R] and -\f[C]client_secret\f[R] with Amazon Drive, or use a third-party oauth -proxy in which case you will need to enter \f[C]client_id\f[R], -\f[C]client_secret\f[R], \f[C]auth_url\f[R] and \f[C]token_url\f[R]. 
-.PP -Note also if you are not using Amazon\[aq]s \f[C]auth_url\f[R] and -\f[C]token_url\f[R], (ie you filled in something for those) then if -setting up on a remote machine you can only use the copying the config -method of -configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) -- \f[C]rclone authorize\f[R] will not work. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] -[snip] -Storage> amazon cloud drive -Amazon Application Client Id - required. -client_id> your client ID goes here -Amazon Application Client Secret - required. -client_secret> your client secret goes here -Auth server URL - leave blank to use Amazon\[aq]s. -auth_url> Optional auth URL -Token server url - leave blank to use Amazon\[aq]s. -token_url> Optional token URL -Remote config -Make sure your Redirect URL is set to \[dq]http://127.0.0.1:53682/\[dq] in your custom config. -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = your client ID goes here -client_secret = your client secret goes here -auth_url = Optional auth URL -token_url = Optional token URL -token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxx\[dq],\[dq]expiry\[dq]:\[dq]2015-09-06T16:07:39.658438471+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your Amazon Drive -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your Amazon Drive -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to an Amazon Drive directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Modification times and hashes -.PP -Amazon Drive doesn\[aq]t allow modification times to be changed via the -API so these won\[aq]t be accurate or used for syncing. -.PP -It does support the MD5 hash algorithm, so for a more accurate sync, you -can use the \f[C]--checksum\f[R] flag. 
-.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Deleting files -.PP -Any files you delete with rclone will end up in the trash. -Amazon don\[aq]t provide an API to permanently delete files, nor to -empty the trash, so you will have to do that with one of Amazon\[aq]s -apps or via the Amazon Drive website. -As of November 17, 2016, files are automatically deleted by Amazon from -the trash after 30 days. -.SS Using with non \f[C].com\f[R] Amazon accounts -.PP -Let\[aq]s say you usually use \f[C]amazon.co.uk\f[R]. -When you authenticate with rclone it will take you to an -\f[C]amazon.com\f[R] page to log in. -Your \f[C]amazon.co.uk\f[R] email and password should work here just -fine. -.SS Standard options -.PP -Here are the Standard options specific to amazon cloud drive (Amazon -Drive). -.SS --acd-client-id -.PP -OAuth Client Id. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_ACD_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-client-secret -.PP -OAuth Client Secret. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_ACD_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false .SS Advanced options .PP -Here are the Advanced options specific to amazon cloud drive (Amazon -Drive). -.SS --acd-token +Here are the Advanced options specific to alias (Alias for an existing +remote). +.SS --alias-description .PP -OAuth Access Token as a JSON blob. +Description of the remote .PP Properties: .IP \[bu] 2 -Config: token +Config: description .IP \[bu] 2 -Env Var: RCLONE_ACD_TOKEN +Env Var: RCLONE_ALIAS_DESCRIPTION .IP \[bu] 2 Type: string .IP \[bu] 2 Required: false -.SS --acd-auth-url -.PP -Auth server URL. -.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_ACD_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-token-url -.PP -Token server url. -.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_ACD_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-checkpoint -.PP -Checkpoint for internal polling (debug). -.PP -Properties: -.IP \[bu] 2 -Config: checkpoint -.IP \[bu] 2 -Env Var: RCLONE_ACD_CHECKPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-upload-wait-per-gb -.PP -Additional time per GiB to wait after a failed complete upload to see if -it appears. -.PP -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. -This happens sometimes for files over 1 GiB in size and nearly every -time for files bigger than 10 GiB. -This parameter controls the time rclone waits for the file to appear. -.PP -The default value for this parameter is 3 minutes per GiB, so by default -it will wait 3 minutes for every GiB uploaded to see if the file -appears. -.PP -You can disable this feature by setting it to 0. -This may cause conflict errors as rclone retries the failed upload but -the file will most likely appear correctly eventually. 
-.PP -These values were determined empirically by observing lots of uploads of -big files for a range of file sizes. -.PP -Upload with the \[dq]-v\[dq] flag to see more info about what rclone is -doing in this situation. -.PP -Properties: -.IP \[bu] 2 -Config: upload_wait_per_gb -.IP \[bu] 2 -Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 3m0s -.SS --acd-templink-threshold -.PP -Files >= this size will be downloaded via their tempLink. -.PP -Files this size or more will be downloaded via their \[dq]tempLink\[dq]. -This is to work around a problem with Amazon Drive which blocks -downloads of files bigger than about 10 GiB. -The default for this is 9 GiB which shouldn\[aq]t need to be changed. -.PP -To download files above this threshold, rclone requests a -\[dq]tempLink\[dq] which downloads the file through a temporary URL -directly from the underlying S3 storage. -.PP -Properties: -.IP \[bu] 2 -Config: templink_threshold -.IP \[bu] 2 -Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 9Gi -.SS --acd-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_ACD_ENCODING -.IP \[bu] 2 -Type: Encoding -.IP \[bu] 2 -Default: Slash,InvalidUtf8,Dot -.SS Limitations -.PP -Note that Amazon Drive is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -Amazon Drive has rate limiting so you may notice errors in the sync (429 -errors). -rclone will automatically retry the sync up to 3 times by default (see -\f[C]--retries\f[R] flag) which should hopefully work around this -problem. -.PP -Amazon Drive has an internal limit of file sizes that can be uploaded to -the service. -This limit is not officially published, but all files larger than this -will fail. -.PP -At the time of writing (Jan 2016) is in the area of 50 GiB per file. -This means that larger files are likely to fail. -.PP -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. -To avoid this problem, use \f[C]--max-size 50000M\f[R] option to limit -the maximum size of uploaded files. -Note that \f[C]--max-size\f[R] does not split files into segments, it -only ignores files over this size. -.PP -\f[C]rclone about\f[R] is not supported by the Amazon Drive backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) .SH Amazon S3 Storage Providers .PP The S3 backend can be used with a number of different providers: @@ -28971,7 +31146,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Liara, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... 
\[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -29605,6 +31780,8 @@ being written to: \f[C]PutObject\f[R] .IP \[bu] 2 \f[C]PutObjectACL\f[R] +.IP \[bu] 2 +\f[C]CreateBucket\f[R] (unless using s3-no-check-bucket) .PP When using the \f[C]lsd\f[R] subcommand, the \f[C]ListAllMyBuckets\f[R] permission is required. @@ -29650,6 +31827,10 @@ It assumes that \f[C]USER_NAME\f[R] has been created. .IP "2." 3 The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket\[aq]s objects. +.IP "3." 3 +When using s3-no-check-bucket and the bucket already exsits, the +\f[C]\[dq]arn:aws:s3:::BUCKET_NAME\[dq]\f[R] doesn\[aq]t have to be +included. .PP For reference, here\[aq]s an Ansible script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) @@ -31037,10 +33218,10 @@ Type: string Required: false .SS --s3-upload-concurrency .PP -Concurrency for multipart uploads. +Concurrency for multipart uploads and copies. .PP This is the number of chunks of the same file that are uploaded -concurrently. +concurrently for multipart uploads and copies. .PP If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing @@ -31097,6 +33278,22 @@ Env Var: RCLONE_S3_V2_AUTH Type: bool .IP \[bu] 2 Default: false +.SS --s3-use-dual-stack +.PP +If true use AWS S3 dual-stack endpoint (IPv6 support). +.PP +See AWS Docs on Dualstack +Endpoints (https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html) +.PP +Properties: +.IP \[bu] 2 +Config: use_dual_stack +.IP \[bu] 2 +Env Var: RCLONE_S3_USE_DUAL_STACK +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS --s3-use-accelerate-endpoint .PP If true use the AWS S3 accelerated endpoint. @@ -31448,6 +33645,27 @@ Env Var: RCLONE_S3_VERSION_AT Type: Time .IP \[bu] 2 Default: off +.SS --s3-version-deleted +.PP +Show deleted file markers when using versions. +.PP +This shows deleted file markers in the listing when using versions. +These will appear as 0 size files. +The only operation which can be performed on them is deletion. +.PP +Deleting a delete marker will reveal the previous version. +.PP +Deleted files will always show with a timestamp. +.PP +Properties: +.IP \[bu] 2 +Config: version_deleted +.IP \[bu] 2 +Env Var: RCLONE_S3_VERSION_DELETED +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS --s3-decompress .PP If set this will decompress gzip encoded objects. @@ -31620,6 +33838,19 @@ Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS Type: Tristate .IP \[bu] 2 Default: unset +.SS --s3-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_S3_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Metadata .PP User metadata is stored as x-amz-meta- keys. @@ -32424,10 +34655,10 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] -Storage> 5 +Storage> s3 Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -32561,18 +34792,11 @@ Select \[dq]s3\[dq] storage. 
.nf \f[C] Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \[rs] \[dq]alias\[dq] - 2 / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS) - \[rs] \[dq]s3\[dq] - 4 / Backblaze B2 - \[rs] \[dq]b2\[dq] [snip] - 23 / HTTP - \[rs] \[dq]http\[dq] -Storage> 3 +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \[rs] \[dq]s3\[dq] +[snip] +Storage> s3 \f[R] .fi .IP "4." 3 @@ -32767,7 +34991,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -32884,7 +35108,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33193,15 +35417,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value - 1 / 1Fichier - \[rs] (fichier) - 2 / Akamai NetStorage - \[rs] (netstorage) - 3 / Alias for an existing remote - \[rs] (alias) - 4 / Amazon Drive - \[rs] (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33510,7 +35727,7 @@ Choose \f[C]s3\f[R] backend Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33849,7 +36066,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (\[dq]\[dq]). Choose a number from below, or type in your own value [snip] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -33966,7 +36183,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. ... - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) ... 
Storage> s3 @@ -34230,15 +36447,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value - 1 / 1Fichier - \[rs] (fichier) - 2 / Akamai NetStorage - \[rs] (netstorage) - 3 / Alias for an existing remote - \[rs] (alias) - 4 / Amazon Drive - \[rs] (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -34475,7 +36685,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others +XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others \[rs] (s3) [snip] Storage> s3 @@ -34738,13 +36948,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value -1 / 1Fichier - \[rs] \[dq]fichier\[dq] - 2 / Alias for an existing remote - \[rs] \[dq]alias\[dq] - 3 / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -35192,7 +37397,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (\[dq]\[dq]). Choose a number from below, or type in your own value - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] Storage> s3 @@ -35844,9 +38049,12 @@ Properties: #### --b2-download-auth-duration -Time before the authorization token will expire in s or suffix ms|s|m|h|d. +Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. + +This is used in combination with \[dq]rclone link\[dq] for making files +accessible to the public and sets the duration before the download +authorization token will expire. -The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Properties: @@ -35922,6 +38130,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_B2_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the b2 backend. 
@@ -36450,6 +38669,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot +#### --box-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_BOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -37119,6 +39349,17 @@ Properties: - Type: Duration - Default: 1s +#### --cache-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CACHE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the cache backend. @@ -37594,6 +39835,17 @@ Properties: - If meta format is set to \[dq]none\[dq], rename transactions will always be used. - This method is EXPERIMENTAL, don\[aq]t use on production systems. +#### --chunker-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CHUNKER_DESCRIPTION +- Type: string +- Required: false + # Citrix ShareFile @@ -37878,6 +40130,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --sharefile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SHAREFILE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -38436,6 +40699,22 @@ Properties: - Type: bool - Default: false +#### --crypt-strict-names + +If set, this will raise an error when crypt comes across a filename that can\[aq]t be decrypted. + +(By default, rclone will just log a NOTICE and continue as normal.) +This can happen if encrypted and unencrypted files are stored in the same +directory (which is not recommended.) It may also indicate a more serious +problem that should be investigated. + +Properties: + +- Config: strict_names +- Env Var: RCLONE_CRYPT_STRICT_NAMES +- Type: bool +- Default: false + #### --crypt-filename-encoding How to encode the encrypted filename to text string. @@ -38473,6 +40752,17 @@ Properties: - Type: string - Default: \[dq].bin\[dq] +#### --crypt-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CRYPT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -38641,7 +40931,7 @@ encoding is modified in two ways: * we strip the padding character \[ga]=\[ga] \[ga]base32\[ga] is used rather than the more efficient \[ga]base64\[ga] so rclone can be -used on case insensitive remotes (e.g. Windows, Amazon Drive). +used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc). ### Key derivation @@ -38800,6 +41090,17 @@ Properties: - Type: SizeSuffix - Default: 20Mi +#### --compress-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMPRESS_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -38944,6 +41245,21 @@ Properties: - Type: SpaceSepList - Default: +### Advanced options + +Here are the Advanced options specific to combine (Combine several remotes into one). + +#### --combine-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMBINE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
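+
+As an example of the new \[ga]description\[ga] option added across backends
+in this release (the remote name and values here are illustrative), it
+can be set in the config file like any other option:
+
+    [mycombine]
+    type = combine
+    upstreams = images=s3:imagesbucket files=drive:important/files
+    description = Pooled remotes for media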
@@ -39394,6 +41710,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --dropbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DROPBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -39698,6 +42025,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --filefabric-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FILEFABRIC_DESCRIPTION +- Type: string +- Required: false + # FTP @@ -40124,6 +42462,17 @@ Properties: - \[dq]Ctl,LeftPeriod,Slash\[dq] - VsFTPd can\[aq]t handle file names starting with dot +#### --ftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FTP_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -40842,6 +43191,17 @@ Properties: - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot +#### --gcs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GCS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -42207,10 +44567,23 @@ Properties: - \[dq]true\[dq] - Get GCP IAM credentials from the environment (env vars or IAM). +#### --drive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DRIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored in the properties field of the drive object. +Metadata is supported on files and directories. + Here are the possible system metadata items for the drive backend. | Name | Help | Type | Example | Read Only | @@ -43054,6 +45427,19 @@ T{ RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT - Type: Duration - Default: 10m0s T} T{ +#### --gphotos-description +T} +T{ +Description of the remote +T} +T{ +Properties: +T} +T{ +- Config: description - Env Var: RCLONE_GPHOTOS_DESCRIPTION - Type: +string - Required: false +T} +T{ ## Limitations T} T{ @@ -43418,6 +45804,17 @@ Properties: - Type: SizeSuffix - Default: 0 +#### --hasher-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HASHER_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -43753,6 +46150,17 @@ Properties: - Type: Encoding - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot +#### --hdfs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HDFS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44159,6 +46567,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --hidrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HIDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44381,6 +46800,17 @@ Properties: - Type: bool - Default: false +#### --http-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HTTP_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the http backend. 
@@ -44626,6 +47056,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket +#### --imagekit-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_IMAGEKIT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -44895,6 +47336,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot +#### --internetarchive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_INTERNETARCHIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Metadata fields provided by Internet Archive. @@ -45339,6 +47791,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot +#### --jottacloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_JOTTACLOUD_DESCRIPTION +- Type: string +- Required: false + ### Metadata Jottacloud has limited support for metadata, currently an extended set of timestamps. @@ -45558,6 +48021,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --koofr-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_KOOFR_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -45712,6 +48186,21 @@ Properties: - Type: string - Required: true +### Advanced options + +Here are the Advanced options specific to linkbox (Linkbox). + +#### --linkbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LINKBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -46103,6 +48592,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --mailru-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MAILRU_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -46372,6 +48872,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8,Dot +#### --mega-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEGA_DESCRIPTION +- Type: string +- Required: false + ### Process \[ga]killed\[ga] @@ -46450,6 +48961,21 @@ The memory backend replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters). +### Advanced options + +Here are the Advanced options specific to memory (In memory object storage system.). + +#### --memory-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEMORY_DESCRIPTION +- Type: string +- Required: false + # Akamai NetStorage @@ -46698,6 +49224,17 @@ Properties: - \[dq]https\[dq] - HTTPS protocol +#### --netstorage-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_NETSTORAGE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the netstorage backend. @@ -47545,6 +50082,35 @@ Properties: - Type: bool - Default: false +#### --azureblob-delete-snapshots + +Set to specify how to deal with snapshots on blob deletion. 
+ +Properties: + +- Config: delete_snapshots +- Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS +- Type: string +- Required: false +- Choices: + - \[dq]\[dq] + - By default, the delete operation fails if a blob has snapshots + - \[dq]include\[dq] + - Specify \[aq]include\[aq] to remove the root blob and all its snapshots + - \[dq]only\[dq] + - Specify \[aq]only\[aq] to remove only the snapshots but keep the root blob. + +#### --azureblob-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREBLOB_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -48266,6 +50832,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot +#### --azurefiles-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREFILES_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -48899,7 +51476,7 @@ Properties: If set rclone will use delta listing to implement recursive listings. -If this flag is set the the onedrive backend will advertise \[ga]ListR\[ga] +If this flag is set the onedrive backend will advertise \[ga]ListR\[ga] support for recursive listings. Setting this flag speeds up these things greatly: @@ -48932,6 +51509,30 @@ Properties: - Type: bool - Default: false +#### --onedrive-metadata-permissions + +Control whether permissions should be read or written in metadata. + +Reading permissions metadata from files can be done quickly, but it +isn\[aq]t always desirable to set the permissions from the metadata. + + +Properties: + +- Config: metadata_permissions +- Env Var: RCLONE_ONEDRIVE_METADATA_PERMISSIONS +- Type: Bits +- Default: off +- Examples: + - \[dq]off\[dq] + - Do not read or write the value + - \[dq]read\[dq] + - Read the value only + - \[dq]write\[dq] + - Write the value only + - \[dq]read,write\[dq] + - Read and Write the value. + #### --onedrive-encoding The encoding for the backend. @@ -48945,1609 +51546,2748 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --onedrive-description - -## Limitations - -If you don\[aq]t use rclone for 90 days the refresh token will -expire. This will result in authorization problems. This is easy to -fix by running the \[ga]rclone config reconnect remote:\[ga] command to get a -new token and refresh token. - -### Naming - -Note that OneDrive is case insensitive so you can\[aq]t have a -file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -There are quite a few characters that can\[aq]t be in OneDrive file -names. These can\[aq]t occur on Windows platforms, but on non-Windows -platforms they are common. Rclone will map these names to and from an -identical looking unicode equivalent. For example if a file has a \[ga]?\[ga] -in it will be mapped to \[ga]\[uFF1F]\[ga] instead. - -### File sizes - -The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). - -### Path length - -The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. 
If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. - -### Number of files - -OneDrive seems to be OK with at least 50,000 files in a folder, but at -100,000 rclone will get errors listing the directory like \[ga]couldn\[cq]t -list files: UnknownError:\[ga]. See -[#2707](https://github.com/rclone/rclone/issues/2707) for more info. - -An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). - -## Versions - -Every change in a file OneDrive causes the service to create a new -version of the file. This counts against a users quota. For -example changing the modification time of a file creates a second -version, so the file apparently uses twice the space. - -For example the \[ga]copy\[ga] command is affected by this as rclone copies -the file and then afterwards sets the modification time to match the -source file which uses another version. - -You can use the \[ga]rclone cleanup\[ga] command (see below) to remove all old -versions. - -Or you can set the \[ga]no_versions\[ga] parameter to \[ga]true\[ga] and rclone will -remove versions after operations which create new versions. This takes -extra transactions so only enable it if you need it. - -**Note** At the time of writing Onedrive Personal creates versions -(but not for setting the modification time) but the API for removing -them returns \[dq]API not found\[dq] so cleanup and \[ga]no_versions\[ga] should not -be used on Onedrive Personal. - -### Disabling versioning - -Starting October 2018, users will no longer be able to -disable versioning by default. This is because Microsoft has brought -an -[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) -to the mechanism. To change this new default setting, a PowerShell -command is required to be run by a SharePoint admin. If you are an -admin, you can run these commands in PowerShell to change that -setting: - -1. \[ga]Install-Module -Name Microsoft.Online.SharePoint.PowerShell\[ga] (in case you haven\[aq]t installed this already) -2. \[ga]Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking\[ga] -3. \[ga]Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU\[at]YOURSITE.COM\[ga] (replacing \[ga]YOURSITE\[ga], \[ga]YOU\[ga], \[ga]YOURSITE.COM\[ga] with the actual values; this will prompt for your credentials) -4. \[ga]Set-SPOTenant -EnableMinimumVersionRequirement $False\[ga] -5. \[ga]Disconnect-SPOService\[ga] (to disconnect from the server) - -*Below are the steps for normal users to disable versioning. If you don\[aq]t see the \[dq]No Versioning\[dq] option, make sure the above requirements are met.* - -User [Weropol](https://github.com/Weropol) has found a method to disable -versioning on OneDrive - -1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. -2. Click Site settings. -3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. -4. Click Customize \[dq]Documents\[dq]. -5. Click General Settings > Versioning Settings. -6. Under Document Version History select the option No versioning. 
-Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. -7. Apply the changes by clicking OK. -8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) -9. Restore the versioning settings after using rclone. (Optional) - -## Cleanup - -OneDrive supports \[ga]rclone cleanup\[ga] which causes rclone to look through -every file under the path supplied and delete all version but the -current version. Because this involves traversing all the files, then -querying each file for versions it can be quite slow. Rclone does -\[ga]--checkers\[ga] tests in parallel. The command also supports \[ga]--interactive\[ga]/\[ga]i\[ga] -or \[ga]--dry-run\[ga] which is a great way to see what it would do. - - rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir - -**NB** Onedrive personal can\[aq]t currently delete versions - -## Troubleshooting ## - -### Excessive throttling or blocked on SharePoint - -If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: \[ga]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\[ga] - -The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) - -### Unexpected file size/hash differences on Sharepoint #### - -It is a -[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) -issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies -uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and -hash checks to fail. There are also other situations that will cause OneDrive to -report inconsistent file sizes. To use rclone with such -affected files on Sharepoint, you -may disable these checks with the following command line arguments: -\f[R] -.fi -.PP ---ignore-checksum --ignore-size -.IP -.nf -\f[C] -Alternatively, if you have write access to the OneDrive files, it may be possible -to fix this problem for certain files, by attempting the steps below. -Open the web interface for [OneDrive](https://onedrive.live.com) and find the -affected files (which will be in the error messages/log for rclone). Simply click on -each of these files, causing OneDrive to open them on the web. This will cause each -file to be converted in place to a format that is functionally equivalent -but which will no longer trigger the size discrepancy. Once all problematic files -are converted you will no longer need the ignore options above. - -### Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq] #### - -It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue -that Sharepoint (not OneDrive or OneDrive for Business) may return \[dq]item not -found\[dq] errors when users try to replace or delete uploaded files; this seems to -mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use -the \[ga]--backup-dir \[ga] command line argument so rclone moves the -files to be replaced/deleted into a given backup directory (instead of directly -replacing/deleting them). 
For example, to instruct rclone to move the files into -the directory \[ga]rclone-backup-dir\[ga] on backend \[ga]mysharepoint\[ga], you may use: -\f[R] -.fi -.PP ---backup-dir mysharepoint:rclone-backup-dir -.IP -.nf -\f[C] -### access\[rs]_denied (AADSTS65005) #### -\f[R] -.fi -.PP -Error: access_denied Code: AADSTS65005 Description: Using application -\[aq]rclone\[aq] is currently not supported for your organization -[YOUR_ORGANIZATION] because it is in an unmanaged state. -An administrator needs to claim ownership of the company by DNS -validation of [YOUR_ORGANIZATION] before the application rclone can be -provisioned. -.IP -.nf -\f[C] -This means that rclone can\[aq]t use the OneDrive for Business API with your account. You can\[aq]t do much about it, maybe write an email to your admins. - -However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint - -### invalid\[rs]_grant (AADSTS50076) #### -\f[R] -.fi -.PP -Error: invalid_grant Code: AADSTS50076 Description: Due to a -configuration change made by your administrator, or because you moved to -a new location, you must use multi-factor authentication to access -\[aq]...\[aq]. -.IP -.nf -\f[C] -If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run \[ga]rclone config\[ga], and choose to edit your OneDrive backend. Then, you don\[aq]t need to actually make any changes until you reach this question: \[ga]Already have a token - refresh?\[ga]. For this question, answer \[ga]y\[ga] and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. - -### Invalid request when making public links #### - -On Sharepoint and OneDrive for Business, \[ga]rclone link\[ga] may return an \[dq]Invalid -request\[dq] error. A possible cause is that the organisation admin didn\[aq]t allow -public links to be made for the organisation/sharepoint library. To fix the -permissions as an admin, take a look at the docs: -[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off), -[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3). - -### Can not access \[ga]Shared\[ga] with me files - -Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: - -1. Visit [https://onedrive.live.com](https://onedrive.live.com/) -2. Right click a item in \[ga]Shared\[ga], then click \[ga]Add shortcut to My files\[ga] in the context - ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png \[dq]Screenshot (Shared with me)\[dq]) -3. The shortcut will appear in \[ga]My files\[ga], you can access it with rclone, it behaves like a normal folder/file. - ![in_my_files](https://i.imgur.com/0S8H3li.png \[dq]Screenshot (My Files)\[dq]) - ![rclone_mount](https://i.imgur.com/2Iq66sW.png \[dq]Screenshot (rclone mount)\[dq]) - -### Live Photos uploaded from iOS (small video clips in .heic files) - -The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) -of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. 
-The usage and download of these uploaded Live Photos is unfortunately still work-in-progress -and this introduces several issues when copying, synchronising and mounting \[en] both in rclone and in the native OneDrive client on Windows. - -The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. -Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. -The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive. - -The different sizes will cause \[ga]rclone copy/sync\[ga] to repeatedly recopy unmodified photos something like this: - - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) - -These recopies can be worked around by adding \[ga]--ignore-size\[ga]. Please note that this workaround only syncs the still-picture not the movie clip, -and relies on modification dates being correctly updated on all files in all situations. - -The different sizes will also cause \[ga]rclone check\[ga] to report size errors something like this: - - ERROR : 20230203_123826234_iOS.heic: sizes differ - -These check errors can be suppressed by adding \[ga]--ignore-size\[ga]. - -The different sizes will also cause \[ga]rclone mount\[ga] to fail downloading with an error something like this: - - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF - -or like this when using \[ga]--cache-mode=full\[ga]: - - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - -# OpenDrive - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi -.IP "n)" 3 -New remote -.IP "o)" 3 -Delete remote -.IP "p)" 3 -Quit config e/n/d/q> n name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -OpenDrive \ \[dq]opendrive\[dq] [snip] Storage> opendrive Username -username> Password -.IP "q)" 3 -Yes type in my own password -.IP "r)" 3 -Generate random password y/g> y Enter the password: password: Confirm -the password: password: -------------------- [remote] username = -password = *** ENCRYPTED *** -------------------- -.IP "s)" 3 -Yes this is OK -.IP "t)" 3 -Edit this remote -.IP "u)" 3 -Delete this remote y/e/d> y -.IP -.nf -\f[C] -List directories in top level of your OpenDrive - - rclone lsd remote: - -List all the files in your OpenDrive - - rclone ls remote: - -To copy a local directory to an OpenDrive directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -OpenDrive allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -The MD5 hash algorithm is supported. 
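Because OpenDrive stores both 1-second modification times and MD5 hashes, a transfer can be spot-checked end to end. A brief sketch (paths follow the \[ga]remote:backup\[ga] example above):

    rclone lsl remote:backup                 # list files with modification times
    rclone md5sum remote:backup              # list MD5 hashes
    rclone check /home/source remote:backup  # compare source and destination by size and hash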
- -### Restricted filename characters - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| NUL | 0x00 | \[u2400] | -| / | 0x2F | \[uFF0F] | -| \[dq] | 0x22 | \[uFF02] | -| * | 0x2A | \[uFF0A] | -| : | 0x3A | \[uFF1A] | -| < | 0x3C | \[uFF1C] | -| > | 0x3E | \[uFF1E] | -| ? | 0x3F | \[uFF1F] | -| \[rs] | 0x5C | \[uFF3C] | -| \[rs]| | 0x7C | \[uFF5C] | - -File names can also not begin or end with the following characters. -These only get replaced if they are the first or last character in the name: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| SP | 0x20 | \[u2420] | -| HT | 0x09 | \[u2409] | -| LF | 0x0A | \[u240A] | -| VT | 0x0B | \[u240B] | -| CR | 0x0D | \[u240D] | - - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to opendrive (OpenDrive). - -#### --opendrive-username - -Username. +Description of the remote Properties: -- Config: username -- Env Var: RCLONE_OPENDRIVE_USERNAME -- Type: string -- Required: true - -#### --opendrive-password - -Password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: password -- Env Var: RCLONE_OPENDRIVE_PASSWORD -- Type: string -- Required: true - -### Advanced options - -Here are the Advanced options specific to opendrive (OpenDrive). - -#### --opendrive-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_OPENDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot - -#### --opendrive-chunk-size - -Files will be uploaded in chunks this size. - -Note that these chunks are buffered in memory so increasing them will -increase memory use. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 10Mi - - - -## Limitations - -Note that OpenDrive is case insensitive so you can\[aq]t have a -file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -There are quite a few characters that can\[aq]t be in OpenDrive file -names. These can\[aq]t occur on Windows platforms, but on non-Windows -platforms they are common. Rclone will map these names to and from an -identical looking unicode equivalent. For example if a file has a \[ga]?\[ga] -in it will be mapped to \[ga]\[uFF1F]\[ga] instead. - -\[ga]rclone about\[ga] is not supported by the OpenDrive backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union -remote. 
- -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - -# Oracle Object Storage -- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) -- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) -- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) - -Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] command.) You may put subdirectories in -too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. - -Sample command to transfer local artifacts to remote:bucket in oracle object storage: - -\[ga]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv\[ga] - -## Configuration - -Here is an example of making an oracle object storage configuration. \[ga]rclone config\[ga] walks you -through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: - -\f[R] -.fi -.IP "n)" 3 -New remote -.IP "o)" 3 -Delete remote -.IP "p)" 3 -Rename remote -.IP "q)" 3 -Copy remote -.IP "r)" 3 -Set configuration password -.IP "s)" 3 -Quit config e/n/d/r/c/s/q> n -.PP -Enter name for new remote. -name> remote -.PP -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. -[snip] XX / Oracle Cloud Infrastructure Object Storage -\ (oracleobjectstorage) Storage> oracleobjectstorage -.PP -Option provider. -Choose your Auth Provider Choose a number from below, or type in your -own string value. -Press Enter for the default (env_auth). -1 / automatically pickup the credentials from runtime(env), first one to -provide auth wins \ (env_auth) / use an OCI user and an API key for -authentication. -2 | you\[cq]ll need to put in a config file your tenancy OCID, user -OCID, region, the path, fingerprint to an API key. -| https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm -\ (user_principal_auth) / use instance principals to authorize an -instance to make API calls. -3 | each instance has its own identity, and authenticates using the -certificates that are read from instance metadata. -| -https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm -\ (instance_principal_auth) 4 / use resource principals to make API -calls \ (resource_principal_auth) 5 / no credentials needed, this is -typically for reading public buckets \ (no_auth) provider> 2 -.PP -Option namespace. -Object storage namespace Enter a value. -namespace> idbamagbg734 -.PP -Option compartment. -Object storage compartment OCID Enter a value. -compartment> -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -.PP -Option region. -Object storage Region Enter a value. -region> us-ashburn-1 -.PP -Option endpoint. -Endpoint for Object storage API. -Leave blank to use the default endpoint for the region. -Enter a value. -Press Enter to leave empty. 
-endpoint> -.PP -Option config_file. -Full Path to OCI config file Choose a number from below, or type in your -own string value. -Press Enter for the default (\[ti]/.oci/config). -1 / oci configuration file location \ (\[ti]/.oci/config) config_file> -/etc/oci/dev.conf -.PP -Option config_profile. -Profile name inside OCI config file Choose a number from below, or type -in your own string value. -Press Enter for the default (Default). -1 / Use the default profile \ (Default) config_profile> Test -.PP -Edit advanced config? -y) Yes n) No (default) y/n> n -.PP -Configuration complete. -Options: - type: oracleobjectstorage - namespace: idbamagbg734 - -compartment: -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -- region: us-ashburn-1 - provider: user_principal_auth - config_file: -/etc/oci/dev.conf - config_profile: Test Keep this \[dq]remote\[dq] -remote? -y) Yes this is OK (default) e) Edit this remote d) Delete this remote -y/e/d> y -.IP -.nf -\f[C] -See all buckets - - rclone lsd remote: - -Create a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - rclone ls remote:bucket --max-depth 1 - -## Authentication Providers - -OCI has various authentication methods. To learn more about authentication methods please refer [oci authentication -methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) -These choices can be specified in the rclone config file. - -Rclone supports the following OCI authentication provider. - - User Principal - Instance Principal - Resource Principal - No authentication - -### User Principal - -Sample rclone config file for Authentication Provider User Principal: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = user_principal_auth - config_file = /home/opc/.oci/config - config_profile = Default - -Advantages: -- One can use this method from any server within OCI or on-premises or from other cloud provider. - -Considerations: -- you need to configure user\[cq]s privileges / policy to allow access to object storage -- Overhead of managing users and keys. -- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user\[aq]s credentials. - -### Instance Principal - -An OCI compute instance can be authorized to use rclone by using it\[aq]s identity and certificates as an instance principal. -With this approach no credentials have to be stored and managed. - -Sample rclone configuration file for Authentication Provider Instance Principal: - - [opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf - [oos] - type = oracleobjectstorage - namespace = idfn - compartment = ocid1.compartment.oc1..aak7a - region = us-ashburn-1 - provider = instance_principal_auth - -Advantages: - -- With instance principals, you don\[aq]t need to configure user credentials and transfer/ save it to disk in your compute - instances or rotate the credentials. -- You don\[cq]t need to deal with users and keys. -- Greatly helps in automation as you don\[aq]t have to manage access keys, user private keys, storing them in vault, - using kms etc. - -Considerations: - -- You need to configure a dynamic group having this instance as member and add policy to read object storage to that - dynamic group. -- Everyone who has access to this machine can execute the CLI commands. -- It is applicable for oci compute instances only. 
It cannot be used on external instance or resources. - -### Resource Principal - -Resource principal auth is very similar to instance principal auth but used for resources that are not -compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). -To use resource principal ensure Rclone process is started with these environment variables set in its process. - - export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 - export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 - export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem - export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token - -Sample rclone configuration file for Authentication Provider Resource Principal: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = resource_principal_auth - -### No authentication - -Public buckets do not require any authentication mechanism to read objects. -Sample rclone configuration file for No authentication: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = no_auth - -### Modification times and hashes - -The modification time is stored as metadata on the object as -\[ga]opc-meta-mtime\[ga] as floating point since the epoch, accurate to 1 ns. - -If the modification time needs to be updated rclone will attempt to perform a server -side copy to update the modification if the object can be copied in a single part. -In the case the object is larger than 5Gb, the object will be uploaded rather than copied. - -Note that reading this from the object takes an additional \[ga]HEAD\[ga] request as the metadata -isn\[aq]t returned in object listings. - -The MD5 hash algorithm is supported. - -### Multipart uploads - -rclone supports multipart uploads with OOS which means that it can -upload files bigger than 5 GiB. - -Note that files uploaded *both* with multipart upload *and* through -crypt remotes do not have MD5 sums. - -rclone switches from single part uploads to multipart uploads at the -point specified by \[ga]--oos-upload-cutoff\[ga]. This can be a maximum of 5 GiB -and a minimum of 0 (ie always upload multipart files). - -The chunk sizes used in the multipart upload are specified by -\[ga]--oos-chunk-size\[ga] and the number of chunks uploaded concurrently is -specified by \[ga]--oos-upload-concurrency\[ga]. - -Multipart uploads will use \[ga]--transfers\[ga] * \[ga]--oos-upload-concurrency\[ga] * -\[ga]--oos-chunk-size\[ga] extra memory. Single part uploads to not use extra -memory. - -Single part transfers can be faster than multipart transfers or slower -depending on your latency from oos - the more latency, the more likely -single part transfers will be faster. - -Increasing \[ga]--oos-upload-concurrency\[ga] will increase throughput (8 would -be a sensible value) and increasing \[ga]--oos-chunk-size\[ga] also increases -throughput (16M would be sensible). Increasing either of these will -use more memory. The default values are high enough to gain most of -the possible performance without using too much memory. - - -### Standard options - -Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). 
- -#### --oos-provider - -Choose your Auth Provider - -Properties: - -- Config: provider -- Env Var: RCLONE_OOS_PROVIDER -- Type: string -- Default: \[dq]env_auth\[dq] -- Examples: - - \[dq]env_auth\[dq] - - automatically pickup the credentials from runtime(env), first one to provide auth wins - - \[dq]user_principal_auth\[dq] - - use an OCI user and an API key for authentication. - - you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - \[dq]instance_principal_auth\[dq] - - use instance principals to authorize an instance to make API calls. - - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - \[dq]resource_principal_auth\[dq] - - use resource principals to make API calls - - \[dq]no_auth\[dq] - - no credentials needed, this is typically for reading public buckets - -#### --oos-namespace - -Object storage namespace - -Properties: - -- Config: namespace -- Env Var: RCLONE_OOS_NAMESPACE -- Type: string -- Required: true - -#### --oos-compartment - -Object storage compartment OCID - -Properties: - -- Config: compartment -- Env Var: RCLONE_OOS_COMPARTMENT -- Provider: !no_auth -- Type: string -- Required: true - -#### --oos-region - -Object storage Region - -Properties: - -- Config: region -- Env Var: RCLONE_OOS_REGION -- Type: string -- Required: true - -#### --oos-endpoint - -Endpoint for Object storage API. - -Leave blank to use the default endpoint for the region. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_OOS_ENDPOINT +- Config: description +- Env Var: RCLONE_ONEDRIVE_DESCRIPTION - Type: string - Required: false -#### --oos-config-file - -Path to OCI config file - -Properties: - -- Config: config_file -- Env Var: RCLONE_OOS_CONFIG_FILE -- Provider: user_principal_auth -- Type: string -- Default: \[dq]\[ti]/.oci/config\[dq] -- Examples: - - \[dq]\[ti]/.oci/config\[dq] - - oci configuration file location - -#### --oos-config-profile - -Profile name inside the oci config file - -Properties: - -- Config: config_profile -- Env Var: RCLONE_OOS_CONFIG_PROFILE -- Provider: user_principal_auth -- Type: string -- Default: \[dq]Default\[dq] -- Examples: - - \[dq]Default\[dq] - - Use the default profile - -### Advanced options - -Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). - -#### --oos-storage-tier - -The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm - -Properties: - -- Config: storage_tier -- Env Var: RCLONE_OOS_STORAGE_TIER -- Type: string -- Default: \[dq]Standard\[dq] -- Examples: - - \[dq]Standard\[dq] - - Standard storage tier, this is the default tier - - \[dq]InfrequentAccess\[dq] - - InfrequentAccess storage tier - - \[dq]Archive\[dq] - - Archive storage tier - -#### --oos-upload-cutoff - -Cutoff for switching to chunked upload. - -Any files larger than this will be uploaded in chunks of chunk_size. -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_OOS_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 200Mi - -#### --oos-chunk-size - -Chunk size to use for uploading. - -When uploading files larger than upload_cutoff or files with unknown -size (e.g. 
from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] they will be uploaded -as multipart uploads using this chunk size. - -Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered -in memory per transfer. - -If you are transferring large files over high-speed links and you have -enough memory, then increasing this will speed up the transfers. - -Rclone will automatically increase the chunk size when uploading a -large file of known size to stay below the 10,000 chunks limit. - -Files of unknown size are uploaded with the configured -chunk_size. Since the default chunk size is 5 MiB and there can be at -most 10,000 chunks, this means that by default the maximum size of -a file you can stream upload is 48 GiB. If you wish to stream upload -larger files then you will need to increase chunk_size. - -Increasing the chunk size decreases the accuracy of the progress -statistics displayed with \[dq]-P\[dq] flag. - - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_OOS_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Mi - -#### --oos-max-upload-parts - -Maximum number of parts in a multipart upload. - -This option defines the maximum number of multipart chunks to use -when doing a multipart upload. - -OCI has max parts limit of 10,000 chunks. - -Rclone will automatically increase the chunk size when uploading a -large file of a known size to stay below this number of chunks limit. - - -Properties: - -- Config: max_upload_parts -- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS -- Type: int -- Default: 10000 - -#### --oos-upload-concurrency - -Concurrency for multipart uploads. - -This is the number of chunks of the same file that are uploaded -concurrently. - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY -- Type: int -- Default: 10 - -#### --oos-copy-cutoff - -Cutoff for switching to multipart copy. - -Any files larger than this that need to be server-side copied will be -copied in chunks of this size. - -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: copy_cutoff -- Env Var: RCLONE_OOS_COPY_CUTOFF -- Type: SizeSuffix -- Default: 4.656Gi - -#### --oos-copy-timeout - -Timeout for copy. - -Copy is an asynchronous operation, specify timeout to wait for copy to succeed - - -Properties: - -- Config: copy_timeout -- Env Var: RCLONE_OOS_COPY_TIMEOUT -- Type: Duration -- Default: 1m0s - -#### --oos-disable-checksum - -Don\[aq]t store MD5 checksum with object metadata. - -Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can add it to metadata on the object. This is great -for data integrity checking but can cause long delays for large files -to start uploading. - -Properties: - -- Config: disable_checksum -- Env Var: RCLONE_OOS_DISABLE_CHECKSUM -- Type: bool -- Default: false - -#### --oos-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_OOS_ENCODING -- Type: Encoding -- Default: Slash,InvalidUtf8,Dot - -#### --oos-leave-parts-on-error - -If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery. - -It should be set to true for resuming uploads across different sessions. 
- -WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add -additional costs if not cleaned up. - - -Properties: - -- Config: leave_parts_on_error -- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false - -#### --oos-attempt-resume-upload - -If true attempt to resume previously started multipart upload for the object. -This will be helpful to speed up multipart transfers by resuming uploads from past session. - -WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is -aborted and a new multipart upload is started with the new chunk size. - -The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully. - - -Properties: - -- Config: attempt_resume_upload -- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD -- Type: bool -- Default: false - -#### --oos-no-check-bucket - -If set, don\[aq]t attempt to check the bucket exists or create it. - -This can be useful when trying to minimise the number of transactions -rclone does if you know the bucket exists already. - -It can also be needed if the user you are using does not have bucket -creation permissions. - - -Properties: - -- Config: no_check_bucket -- Env Var: RCLONE_OOS_NO_CHECK_BUCKET -- Type: bool -- Default: false - -#### --oos-sse-customer-key-file - -To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated -with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.\[aq] - -Properties: - -- Config: sse_customer_key_file -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-key - -To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to -encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is -needed. For more information, see Using Your Own Keys for Server-Side Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) - -Properties: - -- Config: sse_customer_key -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-key-sha256 - -If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption -key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for -Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). - -Properties: - -- Config: sse_customer_key_sha256 -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-kms-key-id - -if using your own master key in vault, this header specifies the -OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call -the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. -Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. 
- -Properties: - -- Config: sse_kms_key_id -- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-algorithm - -If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as the encryption algorithm. -Object Storage supports \[dq]AES256\[dq] as the encryption algorithm. For more information, see -Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). - -Properties: - -- Config: sse_customer_algorithm -- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - - \[dq]AES256\[dq] - - AES256 - -## Backend commands - -Here are the commands specific to the oracleobjectstorage backend. - -Run them with - - rclone backend COMMAND remote: - -The help below will explain what arguments each command takes. - -See the [backend](https://rclone.org/commands/rclone_backend/) command for more -info on how to pass options and arguments. - -These can be run on a running backend using the rc command -[backend/command](https://rclone.org/rc/#backend-command). - -### rename - -change the name of an object - - rclone backend rename remote: [options] [+] - -This command can be used to rename a object. - -Usage Examples: - - rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name - - -### list-multipart-uploads - -List the unfinished multipart uploads - - rclone backend list-multipart-uploads remote: [options] [+] - -This command lists the unfinished multipart uploads in JSON format. - - rclone backend list-multipart-uploads oos:bucket/path/to/object - -It returns a dictionary of buckets with values as lists of unfinished -multipart uploads. - -You can call it with no bucket in which case it lists all bucket, with -a bucket or with a bucket and path. - +### Metadata + +OneDrive supports System Metadata (not User Metadata, as of this writing) for +both files and directories. Much of the metadata is read-only, and there are some +differences between OneDrive Personal and Business (see table below for +details). + +Permissions are also supported, if \[ga]--onedrive-metadata-permissions\[ga] is set. The +accepted values for \[ga]--onedrive-metadata-permissions\[ga] are \[ga]read\[ga], \[ga]write\[ga], +\[ga]read,write\[ga], and \[ga]off\[ga] (the default). \[ga]write\[ga] supports adding new permissions, +updating the \[dq]role\[dq] of existing permissions, and removing permissions. Updating +and removing require the Permission ID to be known, so it is recommended to use +\[ga]read,write\[ga] instead of \[ga]write\[ga] if you wish to update/remove permissions. + +Permissions are read/written in JSON format using the same schema as the +[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online), +which differs slightly between OneDrive Personal and Business. + +Example for OneDrive Personal: +\[ga]\[ga]\[ga]json +[ { - \[dq]test-bucket\[dq]: [ - { - \[dq]namespace\[dq]: \[dq]test-namespace\[dq], - \[dq]bucket\[dq]: \[dq]test-bucket\[dq], - \[dq]object\[dq]: \[dq]600m.bin\[dq], - \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], - \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], - \[dq]storageTier\[dq]: \[dq]Standard\[dq] - } - ] - - -### cleanup - -Remove unfinished multipart uploads. 
- - rclone backend cleanup remote: [options] [+] - -This command removes unfinished multipart uploads of age greater than -max-age which defaults to 24 hours. - -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. - - rclone backend cleanup oos:bucket/path/to/object - rclone backend cleanup -o max-age=7w oos:bucket/path/to/object - -Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - - -Options: - -- \[dq]max-age\[dq]: Max age of upload to delete - - - -## Tutorials -### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/) - -# QingStor - -Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] -command.) You may put subdirectories in too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. - -## Configuration - -Here is an example of making an QingStor configuration. First run - - rclone config - -This will guide you through an interactive setup process. + \[dq]id\[dq]: \[dq]1234567890ABC!123\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: { + \[dq]id\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]invitation\[dq]: { + \[dq]email\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]link\[dq]: { + \[dq]webUrl\[dq]: \[dq]https://1drv.ms/t/s!1234567890ABC\[dq] + }, + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ], + \[dq]shareId\[dq]: \[dq]s!1234567890ABC\[dq] + } +] \f[R] .fi .PP -No remotes found, make a new one? -n) New remote r) Rename remote c) Copy remote s) Set configuration -password q) Quit config n/r/c/s/q> n name> remote Type of storage to -configure. -Choose a number from below, or type in your own value [snip] XX / -QingStor Object Storage \ \[dq]qingstor\[dq] [snip] Storage> qingstor -Get QingStor credentials from runtime. -Only applies if access_key_id and secret_access_key is blank. -Choose a number from below, or type in your own value 1 / Enter QingStor -credentials in the next step \ \[dq]false\[dq] 2 / Get QingStor -credentials from the environment (env vars or IAM) \ \[dq]true\[dq] -env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or -runtime credentials. -access_key_id> access_key QingStor Secret Access Key (password) - leave -blank for anonymous access or runtime credentials. -secret_access_key> secret_key Enter an endpoint URL to connection -QingStor API. -Leave blank will use the default value -\[dq]https://qingstor.com:443\[dq] endpoint> Zone connect to. -Default is \[dq]pek3a\[dq]. -Choose a number from below, or type in your own value / The Beijing -(China) Three Zone 1 | Needs location constraint pek3a. -\ \[dq]pek3a\[dq] / The Shanghai (China) First Zone 2 | Needs location -constraint sh1a. -\ \[dq]sh1a\[dq] zone> 1 Number of connection retry. -Leave blank will use the default value \[dq]3\[dq]. -connection_retries> Remote config -------------------- [remote] env_auth -= false access_key_id = access_key secret_access_key = secret_key -endpoint = zone = pek3a connection_retries = -------------------- y) Yes -this is OK e) Edit this remote d) Delete this remote y/e/d> y +Example for OneDrive Business: .IP .nf \f[C] -This remote is called \[ga]remote\[ga] and can now be used like this - -See all buckets - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync \[ga]/home/local/directory\[ga] to the remote bucket, deleting any excess -files in the bucket. 
- - rclone sync --interactive /home/local/directory remote:bucket - -### --fast-list - -This remote supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. - -### Multipart uploads - -rclone supports multipart uploads with QingStor which means that it can -upload files bigger than 5 GiB. Note that files uploaded with multipart -upload don\[aq]t have an MD5SUM. - -Note that incomplete multipart uploads older than 24 hours can be -removed with \[ga]rclone cleanup remote:bucket\[ga] just for one bucket -\[ga]rclone cleanup remote:\[ga] for all buckets. QingStor does not ever -remove incomplete multipart uploads so it may be necessary to run this -from time to time. - -### Buckets and Zone - -With QingStor you can list buckets (\[ga]rclone lsd\[ga]) using any zone, -but you can only access the content of a bucket from the zone it was -created in. If you attempt to access a bucket from the wrong zone, -you will get an error, \[ga]incorrect zone, the bucket is not in \[aq]XXX\[aq] -zone\[ga]. - -### Authentication - -There are two ways to supply \[ga]rclone\[ga] with a set of QingStor -credentials. In order of precedence: - - - Directly in the rclone configuration file (as configured by \[ga]rclone config\[ga]) - - set \[ga]access_key_id\[ga] and \[ga]secret_access_key\[ga] - - Runtime configuration: - - set \[ga]env_auth\[ga] to \[ga]true\[ga] in the config file - - Exporting the following environment variables before running \[ga]rclone\[ga] - - Access Key ID: \[ga]QS_ACCESS_KEY_ID\[ga] or \[ga]QS_ACCESS_KEY\[ga] - - Secret Access Key: \[ga]QS_SECRET_ACCESS_KEY\[ga] or \[ga]QS_SECRET_KEY\[ga] - -### Restricted filename characters - -The control characters 0x00-0x1F and / are replaced as in the [default -restricted characters set](https://rclone.org/overview/#restricted-characters). Note -that 0x7F is not replaced. - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to qingstor (QingCloud Object Storage). - -#### --qingstor-env-auth - -Get QingStor credentials from runtime. - -Only applies if access_key_id and secret_access_key is blank. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_QINGSTOR_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - \[dq]false\[dq] - - Enter QingStor credentials in the next step. - - \[dq]true\[dq] - - Get QingStor credentials from the environment (env vars or IAM). - -#### --qingstor-access-key-id - -QingStor Access Key ID. - -Leave blank for anonymous access or runtime credentials. - -Properties: - -- Config: access_key_id -- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID -- Type: string -- Required: false - -#### --qingstor-secret-access-key - -QingStor Secret Access Key (password). - -Leave blank for anonymous access or runtime credentials. - -Properties: - -- Config: secret_access_key -- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY -- Type: string -- Required: false - -#### --qingstor-endpoint - -Enter an endpoint URL to connection QingStor API. - -Leave blank will use the default value \[dq]https://qingstor.com:443\[dq]. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_QINGSTOR_ENDPOINT -- Type: string -- Required: false - -#### --qingstor-zone - -Zone to connect to. - -Default is \[dq]pek3a\[dq]. 
- -Properties: - -- Config: zone -- Env Var: RCLONE_QINGSTOR_ZONE -- Type: string -- Required: false -- Examples: - - \[dq]pek3a\[dq] - - The Beijing (China) Three Zone. - - Needs location constraint pek3a. - - \[dq]sh1a\[dq] - - The Shanghai (China) First Zone. - - Needs location constraint sh1a. - - \[dq]gd2a\[dq] - - The Guangdong (China) Second Zone. - - Needs location constraint gd2a. - -### Advanced options - -Here are the Advanced options specific to qingstor (QingCloud Object Storage). - -#### --qingstor-connection-retries - -Number of connection retries. - -Properties: - -- Config: connection_retries -- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES -- Type: int -- Default: 3 - -#### --qingstor-upload-cutoff - -Cutoff for switching to chunked upload. - -Any files larger than this will be uploaded in chunks of chunk_size. -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 200Mi - -#### --qingstor-chunk-size - -Chunk size to use for uploading. - -When uploading files larger than upload_cutoff they will be uploaded -as multipart uploads using this chunk size. - -Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size are buffered -in memory per transfer. - -If you are transferring large files over high-speed links and you have -enough memory, then increasing this will speed up the transfers. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE -- Type: SizeSuffix -- Default: 4Mi - -#### --qingstor-upload-concurrency - -Concurrency for multipart uploads. - -This is the number of chunks of the same file that are uploaded -concurrently. - -NB if you set this to > 1 then the checksums of multipart uploads -become corrupted (the uploads themselves are not corrupted though). - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY -- Type: int -- Default: 1 - -#### --qingstor-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_QINGSTOR_ENCODING -- Type: Encoding -- Default: Slash,Ctl,InvalidUtf8 - - - -## Limitations - -\[ga]rclone about\[ga] is not supported by the qingstor backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union -remote. - -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - -# Quatrix - -Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g., \[ga]remote:directory/subdirectory\[ga]. - -The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user\[aq]s profile at \[ga]https:///profile/api-keys\[ga] -or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. 
- -See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer - -## Configuration - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: +[ + { + \[dq]id\[dq]: \[dq]48d31887-5fad-4d73-a9f5-3c356e68a038\[dq], + \[dq]grantedToIdentities\[dq]: [ + { + \[dq]user\[dq]: { + \[dq]displayName\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + } + ], + \[dq]link\[dq]: { + \[dq]type\[dq]: \[dq]view\[dq], + \[dq]scope\[dq]: \[dq]users\[dq], + \[dq]webUrl\[dq]: \[dq]https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s\[dq] + }, + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ], + \[dq]shareId\[dq]: \[dq]u!LKj1lkdlals90j1nlkascl\[dq] + }, + { + \[dq]id\[dq]: \[dq]5D33DD65C6932946\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: { + \[dq]displayName\[dq]: \[dq]John Doe\[dq], + \[dq]id\[dq]: \[dq]efee1b77-fb3b-4f65-99d6-274c11914d12\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]roles\[dq]: [ + \[dq]owner\[dq] + ], + \[dq]shareId\[dq]: \[dq]FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U\[dq] + } +] \f[R] .fi .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -Quatrix by Maytech \ \[dq]quatrix\[dq] [snip] Storage> quatrix API key -for accessing Quatrix account. -api_key> your_api_key Host name of Quatrix account. -host> example.quatrix.it +To write permissions, pass in a \[dq]permissions\[dq] metadata key using +this same format. +The +\f[C]--metadata-mapper\f[R] (https://rclone.org/docs/#metadata-mapper) +tool can be very helpful for this. +.PP +When adding permissions, an email address can be provided in the +\f[C]User.ID\f[R] or \f[C]DisplayName\f[R] properties of +\f[C]grantedTo\f[R] or \f[C]grantedToIdentities\f[R]. +Alternatively, an ObjectID can be provided in \f[C]User.ID\f[R]. +At least one valid recipient must be provided in order to add a +permission for a user. +Creating a Public Link is also supported, if \f[C]Link.Scope\f[R] is set +to \f[C]\[dq]anonymous\[dq]\f[R]. +.PP +Example request to add a \[dq]read\[dq] permission: +.IP +.nf +\f[C] +[ + { + \[dq]id\[dq]: \[dq]\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: {}, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]grantedToIdentities\[dq]: [ + { + \[dq]user\[dq]: { + \[dq]id\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + } + ], + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ] + } +] +\f[R] +.fi +.PP +Note that adding a permission can fail if a conflicting permission +already exists for the file/folder. +.PP +To update an existing permission, include both the Permission ID and the +new \f[C]roles\f[R] to be assigned. +\f[C]roles\f[R] is the only property that can be changed. +.PP +To remove permissions, pass in a blob containing only the permissions +you wish to keep (which can be empty, to remove all.) +.PP +Note that both reading and writing permissions requires extra API calls, +so if you don\[aq]t need to read or write permissions it is recommended +to omit \f[C]--onedrive-metadata-permissions\f[R]. +.PP +Metadata and permissions are supported for Folders (directories) as well +as Files. 
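.PP
As a rough sketch of the write flow (the remote name \f[C]onedrive:\f[R],
the paths and the mapper script below are hypothetical), permissions can
be rewritten while copying by combining \f[C]-M\f[R] with a
\f[C]--metadata-mapper\f[R] program:
.IP
.nf
\f[C]
# an external program rewrites the \[dq]permissions\[dq] metadata key in flight
rclone copyto -M --onedrive-metadata-permissions read,write --metadata-mapper bin/set-perms.py onedrive:docs/report.docx onedrive:shared/report.docx
\f[R]
.fi
.PP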
+Note that setting the \f[C]mtime\f[R] or \f[C]btime\f[R] on a Folder +requires one extra API call on OneDrive Business only. +.PP +OneDrive does not currently support User Metadata. +When writing metadata, only writeable system properties will be written +-- any read-only or unrecognized keys passed in will be ignored. +.PP +TIP: to see the metadata and permissions for any file or folder, run: +.IP +.nf +\f[C] +rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read +\f[R] +.fi +.PP +Here are the possible system metadata items for the onedrive backend. .PP .TS tab(@); -lw(20.4n). +lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n). T{ -[remote] api_key = your_api_key host = example.quatrix.it +Name +T}@T{ +Help +T}@T{ +Type +T}@T{ +Example +T}@T{ +Read Only T} _ T{ -y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -\[ga]\[ga]\[ga] +btime +T}@T{ +Time of file birth (creation) with S accuracy (mS for OneDrive +Personal). +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +N T} T{ -Once configured you can then use \f[C]rclone\f[R] like this, +content-type +T}@T{ +The MIME type of the file. +T}@T{ +string +T}@T{ +text/plain +T}@T{ +\f[B]Y\f[R] T} T{ -List directories in top level of your Quatrix +created-by-display-name +T}@T{ +Display name of the user that created the item. +T}@T{ +string +T}@T{ +John Doe +T}@T{ +\f[B]Y\f[R] T} T{ +created-by-id +T}@T{ +ID of the user that created the item. +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +description +T}@T{ +A short description of the file. +Max 1024 characters. +Only supported for OneDrive Personal. +T}@T{ +string +T}@T{ +Contract for signing +T}@T{ +N +T} +T{ +id +T}@T{ +The unique identifier of the item within OneDrive. +T}@T{ +string +T}@T{ +01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K +T}@T{ +\f[B]Y\f[R] +T} +T{ +last-modified-by-display-name +T}@T{ +Display name of the user that last modified the item. +T}@T{ +string +T}@T{ +John Doe +T}@T{ +\f[B]Y\f[R] +T} +T{ +last-modified-by-id +T}@T{ +ID of the user that last modified the item. +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +malware-detected +T}@T{ +Whether OneDrive has detected that the item contains malware. +T}@T{ +boolean +T}@T{ +true +T}@T{ +\f[B]Y\f[R] +T} +T{ +mtime +T}@T{ +Time of last modification with S accuracy (mS for OneDrive Personal). +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +N +T} +T{ +package-type +T}@T{ +If present, indicates that this item is a package instead of a folder or +file. +Packages are treated like files in some contexts and folders in others. +T}@T{ +string +T}@T{ +oneNote +T}@T{ +\f[B]Y\f[R] +T} +T{ +permissions +T}@T{ +Permissions in a JSON dump of OneDrive format. +Enable with --onedrive-metadata-permissions. +Properties: id, grantedTo, grantedToIdentities, invitation, +inheritedFrom, link, roles, shareId +T}@T{ +JSON +T}@T{ +{} +T}@T{ +N +T} +T{ +shared-by-id +T}@T{ +ID of the user that shared the item (if shared). +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-owner-id +T}@T{ +ID of the owner of the shared item (if shared). +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-scope +T}@T{ +If shared, indicates the scope of how the item is shared: anonymous, +organization, or users. +T}@T{ +string +T}@T{ +users +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-time +T}@T{ +Time when the item was shared, with S accuracy (mS for OneDrive +Personal). 
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05Z
+T}@T{
+\f[B]Y\f[R]
+T}
+T{
+utime
+T}@T{
+Time of upload with S accuracy (mS for OneDrive Personal).
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05Z
+T}@T{
+\f[B]Y\f[R]
+T}
+.TE
+.PP
+See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SS Limitations
+.PP
+If you don\[aq]t use rclone for 90 days the refresh token will expire.
+This will result in authorization problems.
+This is easy to fix by running the
+\f[C]rclone config reconnect remote:\f[R] command to get a new token and
+refresh token.
+.SS Naming
+.PP
+Note that OneDrive is case insensitive so you can\[aq]t have a file
+called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
+.PP
+There are quite a few characters that can\[aq]t be in OneDrive file
+names.
+These can\[aq]t occur on Windows platforms, but on non-Windows platforms
+they are common.
+Rclone will map these names to and from an identical looking unicode
+equivalent.
+For example, if a file has a \f[C]?\f[R] in it, it will be mapped to
+\f[C]\[uFF1F]\f[R] instead.
+.SS File sizes
+.PP
+The largest allowed file size is 250 GiB for both OneDrive Personal and
+OneDrive for Business (Updated 13 Jan
+2021) (https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
+.SS Path length
+.PP
+The entire path, including the file name, must contain fewer than 400
+characters for OneDrive, OneDrive for Business and SharePoint Online.
+If you are encrypting file and folder names with rclone, you may want to
+pay attention to this limitation because the encrypted names are
+typically longer than the original ones.
+.SS Number of files
+.PP
+OneDrive seems to be OK with at least 50,000 files in a folder, but at
+100,000 rclone will get errors listing the directory like
+\f[C]couldn\[cq]t list files: UnknownError:\f[R].
+See #2707 (https://github.com/rclone/rclone/issues/2707) for more info.
+.PP
+An official document about the limitations for different types of
+OneDrive can be found
+here (https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
+.SS Versions
+.PP
+Every change in a file on OneDrive causes the service to create a new
+version of the file.
+This counts against a user\[aq]s quota.
+For example, changing the modification time of a file creates a second
+version, so the file apparently uses twice the space.
+.PP
+The \f[C]copy\f[R] command is affected by this: rclone copies the file
+and then afterwards sets the modification time to match the source file,
+which uses another version.
+.PP
+You can use the \f[C]rclone cleanup\f[R] command (see below) to remove
+all old versions.
+.PP
+Or you can set the \f[C]no_versions\f[R] parameter to \f[C]true\f[R] and
+rclone will remove versions after operations which create new versions.
+This takes extra transactions so only enable it if you need it.
+.PP
+\f[B]Note\f[R] At the time of writing OneDrive Personal creates versions
+(but not for setting the modification time), but the API for removing
+them returns \[dq]API not found\[dq], so cleanup and
+\f[C]no_versions\f[R] should not be used on OneDrive Personal.
+.SS Disabling versioning
+.PP
+Starting October 2018, users will no longer be able to disable
+versioning by default.
+This is because Microsoft has brought an
+update (https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390)
+to the mechanism.
+To change this new default setting, a PowerShell command is required to
+be run by a SharePoint admin.
+If you are an admin, you can run these commands in PowerShell to change
+that setting:
+.IP "1." 3
+\f[C]Install-Module -Name Microsoft.Online.SharePoint.PowerShell\f[R]
+(in case you haven\[aq]t installed this already)
+.IP "2." 3
+\f[C]Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking\f[R]
+.IP "3." 3
+\f[C]Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU\[at]YOURSITE.COM\f[R]
+(replacing \f[C]YOURSITE\f[R], \f[C]YOU\f[R], \f[C]YOURSITE.COM\f[R]
+with the actual values; this will prompt for your credentials)
+.IP "4." 3
+\f[C]Set-SPOTenant -EnableMinimumVersionRequirement $False\f[R]
+.IP "5." 3
+\f[C]Disconnect-SPOService\f[R] (to disconnect from the server)
+.PP
+\f[I]Below are the steps for normal users to disable versioning. If you
+don\[aq]t see the \[dq]No Versioning\[dq] option, make sure the above
+requirements are met.\f[R]
+.PP
+User Weropol (https://github.com/Weropol) has found a method to disable
+versioning on OneDrive:
+.IP "1." 3
+Open the settings menu by clicking on the gear symbol at the top of the
+OneDrive Business page.
+.IP "2." 3
+Click Site settings.
+.IP "3." 3
+Once on the Site settings page, navigate to Site Administration > Site
+libraries and lists.
+.IP "4." 3
+Click Customize \[dq]Documents\[dq].
+.IP "5." 3
+Click General Settings > Versioning Settings.
+.IP "6." 3
+Under Document Version History select the option No versioning.
+Note: This will disable the creation of new file versions, but will not
+remove any previous versions.
+Your documents are safe.
+.IP "7." 3
+Apply the changes by clicking OK.
+.IP "8." 3
+Use rclone to upload or modify files.
+(I also use the --no-update-modtime flag)
+.IP "9." 3
+Restore the versioning settings after using rclone.
+(Optional)
+.SS Cleanup
+.PP
+OneDrive supports \f[C]rclone cleanup\f[R] which causes rclone to look
+through every file under the path supplied and delete all versions but
+the current version.
+Because this involves traversing all the files, then querying each file
+for versions it can be quite slow.
+Rclone does \f[C]--checkers\f[R] tests in parallel.
+The command also supports \f[C]--interactive\f[R]/\f[C]-i\f[R] or
+\f[C]--dry-run\f[R] which is a great way to see what it would do.
+.IP
+.nf
+\f[C]
+rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
+\f[R]
+.fi
+.PP
+\f[B]NB\f[R] OneDrive Personal can\[aq]t currently delete versions
+.SS Troubleshooting
+.SS Excessive throttling or blocked on SharePoint
+.PP
+If you experience excessive throttling or are being blocked on
+SharePoint, it may help to set the user agent explicitly with a flag
+like this:
+\f[C]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\f[R]
+.PP
+The specific details can be found in the Microsoft document: Avoid
+getting throttled or blocked in SharePoint
+Online (https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
+.SS Unexpected file size/hash differences on Sharepoint
+.PP
+It is a
+known (https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
+issue that Sharepoint (not OneDrive or OneDrive for Business) silently
+modifies uploaded files, mainly Office files (.docx, .xlsx, etc.),
+causing file size and hash checks to fail.
+There are also other situations that will cause OneDrive to report
+inconsistent file sizes.
+To use rclone with such affected files on Sharepoint, you may disable
+these checks with the following command line arguments:
+.IP
+.nf
+\f[C]
+--ignore-checksum --ignore-size
+\f[R]
+.fi
+.PP
+Alternatively, if you have write access to the OneDrive files, it may be
+possible to fix this problem for certain files by attempting the steps
+below.
+Open the web interface for OneDrive (https://onedrive.live.com) and find
+the affected files (which will be in the error messages/log for rclone).
+Simply click on each of these files, causing OneDrive to open them on
+the web.
+This will cause each file to be converted in place to a format that is
+functionally equivalent but which will no longer trigger the size
+discrepancy.
+Once all problematic files are converted you will no longer need the
+ignore options above.
+.SS Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq]
+.PP
+It is a
+known (https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
+that Sharepoint (not OneDrive or OneDrive for Business) may return
+\[dq]item not found\[dq] errors when users try to replace or delete
+uploaded files; this seems to mainly affect Office files (.docx, .xlsx,
+etc.) and web files (.html, .aspx, etc.).
+As a workaround, you may use the \f[C]--backup-dir\f[R]
+command line argument so rclone moves the files to be replaced/deleted
+into a given backup directory (instead of directly replacing/deleting
+them).
+For example, to instruct rclone to move the files into the directory
+\f[C]rclone-backup-dir\f[R] on backend \f[C]mysharepoint\f[R], you may
+use:
+.IP
+.nf
+\f[C]
+--backup-dir mysharepoint:rclone-backup-dir
+\f[R]
+.fi
+.SS access_denied (AADSTS65005)
+.IP
+.nf
+\f[C]
+Error: access_denied
+Code: AADSTS65005
+Description: Using application \[aq]rclone\[aq] is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
+\f[R]
+.fi
+.PP
+This means that rclone can\[aq]t use the OneDrive for Business API with
+your account.
+You can\[aq]t do much about it yourself; consider writing an email to
+your admins.
+.PP
+However, there are other ways to interact with your OneDrive account.
+Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
+.SS invalid_grant (AADSTS50076)
+.IP
+.nf
+\f[C]
+Error: invalid_grant
+Code: AADSTS50076
+Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access \[aq]...\[aq].
+\f[R]
+.fi
+.PP
+If you see the error above after enabling multi-factor authentication
+for your account, you can fix it by refreshing your OAuth refresh token.
+To do that, run \f[C]rclone config\f[R], and choose to edit your
+OneDrive backend.
+Then, you don\[aq]t need to actually make any changes until you reach
+this question: \f[C]Already have a token - refresh?\f[R].
+For this question, answer \f[C]y\f[R] and go through the process to
+refresh your token, just like the first time the backend is configured.
+After this, rclone should work again for this backend.
+.SS Invalid request when making public links
+.PP
+On Sharepoint and OneDrive for Business, \f[C]rclone link\f[R] may
+return an \[dq]Invalid request\[dq] error.
+A possible cause is that the organisation admin didn\[aq]t allow public
+links to be made for the organisation/sharepoint library.
+To fix the permissions as an admin, take a look at the docs:
+1 (https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
+2 (https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
+.SS Cannot access \f[C]Shared with me\f[R] files
+.PP
+Shared with me files are not currently supported by
+rclone (https://github.com/rclone/rclone/issues/4062), but there is a
+workaround:
+.IP "1." 3
+Visit https://onedrive.live.com (https://onedrive.live.com/)
+.IP "2." 3
+Right click an item in \f[C]Shared\f[R], then click
+\f[C]Add shortcut to My files\f[R] in the context menu.
+[IMAGE: make_shortcut (https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png)]
+.IP "3." 3
+The shortcut will appear in \f[C]My files\f[R]; you can access it with
+rclone and it behaves like a normal folder/file.
+[IMAGE: in_my_files (https://i.imgur.com/0S8H3li.png)]
+[IMAGE: rclone_mount (https://i.imgur.com/2Iq66sW.png)]
+.SS Live Photos uploaded from iOS (small video clips in .heic files)
+.PP
+The iOS OneDrive app introduced upload and
+storage (https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
+of Live Photos (https://support.apple.com/en-gb/HT207310) in 2020.
+The usage and download of these uploaded Live Photos is unfortunately
+still work in progress and this introduces several issues when copying,
+synchronising and mounting \[en] both in rclone and in the native
+OneDrive client on Windows.
+.PP
+The root cause can easily be seen if you locate one of your Live Photos
+in the OneDrive web interface.
+Then download the photo from the web interface.
+You will then see that the size of the downloaded .heic file is smaller
+than the size displayed in the web interface.
+The downloaded file is smaller because it only contains a single frame
+(still photo) extracted from the Live Photo (movie) stored in OneDrive.
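+.PP
+If you want to identify affected files before syncing, a size-only
+check against an existing local copy will list them.
+A minimal sketch, assuming your photos live under
+\f[C]remote:Pictures\f[R] with a local copy in
+\f[C]/home/user/Pictures\f[R] (both paths are illustrative):
+.IP
+.nf
+\f[C]
+# report files whose sizes differ, without comparing hashes
+rclone check remote:Pictures /home/user/Pictures --size-only
+\f[R]
+.fi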
+.PP +The different sizes will cause \f[C]rclone copy/sync\f[R] to repeatedly +recopy unmodified photos something like this: +.IP +.nf +\f[C] +DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) +DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK +INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +\f[R] +.fi +.PP +These recopies can be worked around by adding \f[C]--ignore-size\f[R]. +Please note that this workaround only syncs the still-picture not the +movie clip, and relies on modification dates being correctly updated on +all files in all situations. +.PP +The different sizes will also cause \f[C]rclone check\f[R] to report +size errors something like this: +.IP +.nf +\f[C] +ERROR : 20230203_123826234_iOS.heic: sizes differ +\f[R] +.fi +.PP +These check errors can be suppressed by adding \f[C]--ignore-size\f[R]. +.PP +The different sizes will also cause \f[C]rclone mount\f[R] to fail +downloading with an error something like this: +.IP +.nf +\f[C] +ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +\f[R] +.fi +.PP +or like this when using \f[C]--cache-mode=full\f[R]: +.IP +.nf +\f[C] +INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +\f[R] +.fi +.SH OpenDrive +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n) New remote +d) Delete remote +q) Quit config +e/n/d/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / OpenDrive + \[rs] \[dq]opendrive\[dq] +[snip] +Storage> opendrive +Username +username> +Password +y) Yes type in my own password +g) Generate random password +y/g> y +Enter the password: +password: +Confirm the password: +password: +-------------------- +[remote] +username = +password = *** ENCRYPTED *** +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +List directories in top level of your OpenDrive +.IP +.nf +\f[C] rclone lsd remote: -T} -T{ -List all the files in your Quatrix -T} -T{ +\f[R] +.fi +.PP +List all the files in your OpenDrive +.IP +.nf +\f[C] rclone ls remote: -T} -T{ -To copy a local directory to an Quatrix directory called backup -T} -T{ +\f[R] +.fi +.PP +To copy a local directory to an OpenDrive directory called backup +.IP +.nf +\f[C] rclone copy /home/source remote:backup +\f[R] +.fi +.SS Modification times and hashes +.PP +OpenDrive allows modification times to be set on objects accurate to 1 +second. +These will be used to detect whether objects need syncing or not. +.PP +The MD5 hash algorithm is supported. +.SS Restricted filename characters +.PP +.TS +tab(@); +l c c. 
+T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +NUL +T}@T{ +0x00 +T}@T{ +\[u2400] T} T{ -### API key validity +/ +T}@T{ +0x2F +T}@T{ +\[uFF0F] T} T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +T{ +* +T}@T{ +0x2A +T}@T{ +\[uFF0A] +T} +T{ +: +T}@T{ +0x3A +T}@T{ +\[uFF1A] +T} +T{ +< +T}@T{ +0x3C +T}@T{ +\[uFF1C] +T} +T{ +> +T}@T{ +0x3E +T}@T{ +\[uFF1E] +T} +T{ +? +T}@T{ +0x3F +T}@T{ +\[uFF1F] +T} +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +T{ +| +T}@T{ +0x7C +T}@T{ +\[uFF5C] +T} +.TE +.PP +File names can also not begin or end with the following characters. +These only get replaced if they are the first or last character in the +name: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +SP +T}@T{ +0x20 +T}@T{ +\[u2420] +T} +T{ +HT +T}@T{ +0x09 +T}@T{ +\[u2409] +T} +T{ +LF +T}@T{ +0x0A +T}@T{ +\[u240A] +T} +T{ +VT +T}@T{ +0x0B +T}@T{ +\[u240B] +T} +T{ +CR +T}@T{ +0x0D +T}@T{ +\[u240D] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to opendrive (OpenDrive). +.SS --opendrive-username +.PP +Username. +.PP +Properties: +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --opendrive-password +.PP +Password. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP +Here are the Advanced options specific to opendrive (OpenDrive). +.SS --opendrive-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: +Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot +.SS --opendrive-chunk-size +.PP +Files will be uploaded in chunks this size. +.PP +Note that these chunks are buffered in memory so increasing them will +increase memory use. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 10Mi +.SS --opendrive-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +Note that OpenDrive is case insensitive so you can\[aq]t have a file +called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. +.PP +There are quite a few characters that can\[aq]t be in OpenDrive file +names. +These can\[aq]t occur on Windows platforms, but on non-Windows platforms +they are common. +Rclone will map these names to and from an identical looking unicode +equivalent. +For example if a file has a \f[C]?\f[R] in it will be mapped to +\f[C]\[uFF1F]\f[R] instead. +.PP +\f[C]rclone about\f[R] is not supported by the OpenDrive backend. +Backends without this capability cannot determine free space for an +rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member +of an rclone union remote. 
+.PP +See List of backends that do not support rclone +about (https://rclone.org/overview/#optional-features) and rclone +about (https://rclone.org/commands/rclone_about/) +.SH Oracle Object Storage +.IP \[bu] 2 +Oracle Object Storage +Overview (https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) +.IP \[bu] 2 +Oracle Object Storage +FAQ (https://www.oracle.com/cloud/storage/object-storage/faq/) +.IP \[bu] 2 +Oracle Object Storage +Limits (https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) +.PP +Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for +the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. +\f[C]remote:bucket/path/to/dir\f[R]. +.PP +Sample command to transfer local artifacts to remote:bucket in oracle +object storage: +.PP +\f[C]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv\f[R] +.SS Configuration +.PP +Here is an example of making an oracle object storage configuration. +\f[C]rclone config\f[R] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> n + +Enter name for new remote. +name> remote + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Oracle Cloud Infrastructure Object Storage + \[rs] (oracleobjectstorage) +Storage> oracleobjectstorage + +Option provider. +Choose your Auth Provider +Choose a number from below, or type in your own string value. +Press Enter for the default (env_auth). + 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins + \[rs] (env_auth) + / use an OCI user and an API key for authentication. + 2 | you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + \[rs] (user_principal_auth) + / use instance principals to authorize an instance to make API calls. + 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata. + | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + \[rs] (instance_principal_auth) + / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud + 4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM). + | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm + \[rs] (workload_identity_auth) + 5 / use resource principals to make API calls + \[rs] (resource_principal_auth) + 6 / no credentials needed, this is typically for reading public buckets + \[rs] (no_auth) +provider> 2 + +Option namespace. +Object storage namespace +Enter a value. +namespace> idbamagbg734 + +Option compartment. 
+Object storage compartment OCID +Enter a value. +compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba + +Option region. +Object storage Region +Enter a value. +region> us-ashburn-1 + +Option endpoint. +Endpoint for Object storage API. +Leave blank to use the default endpoint for the region. +Enter a value. Press Enter to leave empty. +endpoint> + +Option config_file. +Full Path to OCI config file +Choose a number from below, or type in your own string value. +Press Enter for the default (\[ti]/.oci/config). + 1 / oci configuration file location + \[rs] (\[ti]/.oci/config) +config_file> /etc/oci/dev.conf + +Option config_profile. +Profile name inside OCI config file +Choose a number from below, or type in your own string value. +Press Enter for the default (Default). + 1 / Use the default profile + \[rs] (Default) +config_profile> Test + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: oracleobjectstorage +- namespace: idbamagbg734 +- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba +- region: us-ashburn-1 +- provider: user_principal_auth +- config_file: /etc/oci/dev.conf +- config_profile: Test +Keep this \[dq]remote\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +See all buckets +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +Create a new bucket +.IP +.nf +\f[C] +rclone mkdir remote:bucket +\f[R] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone ls remote:bucket +rclone ls remote:bucket --max-depth 1 +\f[R] +.fi +.SS Authentication Providers +.PP +OCI has various authentication methods. +To learn more about authentication methods please refer oci +authentication +methods (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) +These choices can be specified in the rclone config file. +.PP +Rclone supports the following OCI authentication provider. +.IP +.nf +\f[C] +User Principal +Instance Principal +Resource Principal +Workload Identity +No authentication +\f[R] +.fi +.SS User Principal +.PP +Sample rclone config file for Authentication Provider User Principal: +.IP +.nf +\f[C] +[oos] +type = oracleobjectstorage +namespace = id34 +compartment = ocid1.compartment.oc1..aaba +region = us-ashburn-1 +provider = user_principal_auth +config_file = /home/opc/.oci/config +config_profile = Default +\f[R] +.fi +.PP +Advantages: - One can use this method from any server within OCI or +on-premises or from other cloud provider. +.PP +Considerations: - you need to configure user\[cq]s privileges / policy +to allow access to object storage - Overhead of managing users and keys. +- If the user is deleted, the config file will no longer work and may +cause automation regressions that use the user\[aq]s credentials. +.SS Instance Principal +.PP +An OCI compute instance can be authorized to use rclone by using +it\[aq]s identity and certificates as an instance principal. +With this approach no credentials have to be stored and managed. 
+.PP
+Sample rclone configuration file for Authentication Provider Instance
+Principal:
+.IP
+.nf
+\f[C]
+[opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf
+[oos]
+type = oracleobjectstorage
+namespace = idfn
+compartment = ocid1.compartment.oc1..aak7a
+region = us-ashburn-1
+provider = instance_principal_auth
+\f[R]
+.fi
+.PP
+Advantages:
+.IP \[bu] 2
+With instance principals, you don\[aq]t need to configure user
+credentials and transfer/save them to disk in your compute instances or
+rotate the credentials.
+.IP \[bu] 2
+You don\[cq]t need to deal with users and keys.
+.IP \[bu] 2
+Greatly helps in automation as you don\[aq]t have to manage access keys,
+user private keys, storing them in a vault, using KMS, etc.
+.PP
+Considerations:
+.IP \[bu] 2
+You need to configure a dynamic group having this instance as a member
+and add a policy allowing that dynamic group to read object storage.
+.IP \[bu] 2
+Everyone who has access to this machine can execute the CLI commands.
+.IP \[bu] 2
+It is applicable to OCI compute instances only.
+It cannot be used on external instances or resources.
+.SS Resource Principal
+.PP
+Resource principal auth is very similar to instance principal auth but
+is used for resources that are not compute instances, such as serverless
+functions (https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal auth, ensure the rclone process is started
+with these environment variables set:
+.IP
+.nf
+\f[C]
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+\f[R]
+.fi
+.PP
+Sample rclone configuration file for Authentication Provider Resource
+Principal:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = resource_principal_auth
+\f[R]
+.fi
+.SS Workload Identity
+.PP
+Workload Identity auth may be used when running rclone from a Kubernetes
+pod on a Container Engine for Kubernetes (OKE) cluster.
+For more details on configuring Workload Identity, see Granting
+Workloads Access to OCI
+Resources (https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm).
+To use workload identity, ensure rclone is started with these
+environment variables set:
+.IP
+.nf
+\f[C]
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+\f[R]
+.fi
+.SS No authentication
+.PP
+Public buckets do not require any authentication mechanism to read
+objects.
+Sample rclone configuration file for No authentication:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = no_auth
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+The modification time is stored as metadata on the object as
+\f[C]opc-meta-mtime\f[R], as a floating point number of seconds since
+the epoch, accurate to 1 ns.
+.PP
+If the modification time needs to be updated rclone will attempt to
+perform a server side copy to update the modification time if the object
+can be copied in a single part.
+If the object is larger than 5 GiB, the object will be uploaded rather
+than copied.
+.PP
+Note that reading this from the object takes an additional
+\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
+listings.
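+.PP
+As a quick illustration of that cost (the bucket name is illustrative):
+\f[C]rclone ls\f[R] only needs the listing, while \f[C]rclone lsl\f[R]
+also shows modification times and therefore issues one extra HEAD
+request per object, so it will be noticeably slower on large buckets:
+.IP
+.nf
+\f[C]
+# sizes and paths only - no extra HEAD requests
+rclone ls remote:bucket
+
+# sizes, modification times and paths - one HEAD request per object
+rclone lsl remote:bucket
+\f[R]
+.fi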
+.PP
+The MD5 hash algorithm is supported.
+.SS Multipart uploads
+.PP
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.
+.PP
+Note that files uploaded \f[I]both\f[R] with multipart upload
+\f[I]and\f[R] through crypt remotes do not have MD5 sums.
+.PP
+rclone switches from single part uploads to multipart uploads at the
+point specified by \f[C]--oos-upload-cutoff\f[R].
+This can be a maximum of 5 GiB and a minimum of 0 (i.e. always upload
+multipart files).
+.PP
+The chunk sizes used in the multipart upload are specified by
+\f[C]--oos-chunk-size\f[R] and the number of chunks uploaded
+concurrently is specified by \f[C]--oos-upload-concurrency\f[R].
+.PP
+Multipart uploads will use \f[C]--transfers\f[R] *
+\f[C]--oos-upload-concurrency\f[R] * \f[C]--oos-chunk-size\f[R] extra
+memory.
+Single part uploads do not use extra memory.
+.PP
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency from OOS - the more latency, the more likely
+single part transfers will be faster.
+.PP
+Increasing \f[C]--oos-upload-concurrency\f[R] will increase throughput
+(8 would be a sensible value) and increasing \f[C]--oos-chunk-size\f[R]
+also increases throughput (16M would be sensible).
+Increasing either of these will use more memory.
+The default values are high enough to gain most of the possible
+performance without using too much memory.
+.SS Standard options
+.PP
+Here are the Standard options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+.SS --oos-provider
+.PP
+Choose your Auth Provider
+.PP
+Properties:
+.IP \[bu] 2
+Config: provider
+.IP \[bu] 2
+Env Var: RCLONE_OOS_PROVIDER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]env_auth\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]env_auth\[dq]
+.RS 2
+.IP \[bu] 2
+automatically pickup the credentials from runtime(env), first one to
+provide auth wins
+.RE
+.IP \[bu] 2
+\[dq]user_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use an OCI user and an API key for authentication.
+.IP \[bu] 2
+you\[cq]ll need to put in a config file your tenancy OCID, user OCID,
+region, the path, fingerprint to an API key.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+.RE
+.IP \[bu] 2
+\[dq]instance_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use instance principals to authorize an instance to make API calls.
+.IP \[bu] 2
+each instance has its own identity, and authenticates using the
+certificates that are read from instance metadata.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+.RE
+.IP \[bu] 2
+\[dq]workload_identity_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use workload identity to grant OCI Container Engine for Kubernetes
+workloads policy-driven access to OCI resources using OCI Identity and
+Access Management (IAM).
+.IP \[bu] 2 +https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm +.RE +.IP \[bu] 2 +\[dq]resource_principal_auth\[dq] +.RS 2 +.IP \[bu] 2 +use resource principals to make API calls +.RE +.IP \[bu] 2 +\[dq]no_auth\[dq] +.RS 2 +.IP \[bu] 2 +no credentials needed, this is typically for reading public buckets +.RE +.RE +.SS --oos-namespace +.PP +Object storage namespace +.PP +Properties: +.IP \[bu] 2 +Config: namespace +.IP \[bu] 2 +Env Var: RCLONE_OOS_NAMESPACE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-compartment +.PP +Object storage compartment OCID +.PP +Properties: +.IP \[bu] 2 +Config: compartment +.IP \[bu] 2 +Env Var: RCLONE_OOS_COMPARTMENT +.IP \[bu] 2 +Provider: !no_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-region +.PP +Object storage Region +.PP +Properties: +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_OOS_REGION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-endpoint +.PP +Endpoint for Object storage API. +.PP +Leave blank to use the default endpoint for the region. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_OOS_ENDPOINT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --oos-config-file +.PP +Path to OCI config file +.PP +Properties: +.IP \[bu] 2 +Config: config_file +.IP \[bu] 2 +Env Var: RCLONE_OOS_CONFIG_FILE +.IP \[bu] 2 +Provider: user_principal_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]\[ti]/.oci/config\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[ti]/.oci/config\[dq] +.RS 2 +.IP \[bu] 2 +oci configuration file location +.RE +.RE +.SS --oos-config-profile +.PP +Profile name inside the oci config file +.PP +Properties: +.IP \[bu] 2 +Config: config_profile +.IP \[bu] 2 +Env Var: RCLONE_OOS_CONFIG_PROFILE +.IP \[bu] 2 +Provider: user_principal_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]Default\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]Default\[dq] +.RS 2 +.IP \[bu] 2 +Use the default profile +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to oracleobjectstorage (Oracle +Cloud Infrastructure Object Storage). +.SS --oos-storage-tier +.PP +The storage class to use when storing new objects in storage. +https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm +.PP +Properties: +.IP \[bu] 2 +Config: storage_tier +.IP \[bu] 2 +Env Var: RCLONE_OOS_STORAGE_TIER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]Standard\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]Standard\[dq] +.RS 2 +.IP \[bu] 2 +Standard storage tier, this is the default tier +.RE +.IP \[bu] 2 +\[dq]InfrequentAccess\[dq] +.RS 2 +.IP \[bu] 2 +InfrequentAccess storage tier +.RE +.IP \[bu] 2 +\[dq]Archive\[dq] +.RS 2 +.IP \[bu] 2 +Archive storage tier +.RE +.RE +.SS --oos-upload-cutoff +.PP +Cutoff for switching to chunked upload. +.PP +Any files larger than this will be uploaded in chunks of chunk_size. +The minimum is 0 and the maximum is 5 GiB. +.PP +Properties: +.IP \[bu] 2 +Config: upload_cutoff +.IP \[bu] 2 +Env Var: RCLONE_OOS_UPLOAD_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 200Mi +.SS --oos-chunk-size +.PP +Chunk size to use for uploading. +.PP +When uploading files larger than upload_cutoff or files with unknown +size (e.g. +from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] they +will be uploaded as multipart uploads using this chunk size. 
+.PP +Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered +in memory per transfer. +.PP +If you are transferring large files over high-speed links and you have +enough memory, then increasing this will speed up the transfers. +.PP +Rclone will automatically increase the chunk size when uploading a large +file of known size to stay below the 10,000 chunks limit. +.PP +Files of unknown size are uploaded with the configured chunk_size. +Since the default chunk size is 5 MiB and there can be at most 10,000 +chunks, this means that by default the maximum size of a file you can +stream upload is 48 GiB. +If you wish to stream upload larger files then you will need to increase +chunk_size. +.PP +Increasing the chunk size decreases the accuracy of the progress +statistics displayed with \[dq]-P\[dq] flag. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_OOS_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 5Mi +.SS --oos-max-upload-parts +.PP +Maximum number of parts in a multipart upload. +.PP +This option defines the maximum number of multipart chunks to use when +doing a multipart upload. +.PP +OCI has max parts limit of 10,000 chunks. +.PP +Rclone will automatically increase the chunk size when uploading a large +file of a known size to stay below this number of chunks limit. +.PP +Properties: +.IP \[bu] 2 +Config: max_upload_parts +.IP \[bu] 2 +Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 10000 +.SS --oos-upload-concurrency +.PP +Concurrency for multipart uploads. +.PP +This is the number of chunks of the same file that are uploaded +concurrently. +.PP +If you are uploading small numbers of large files over high-speed links +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. +.PP +Properties: +.IP \[bu] 2 +Config: upload_concurrency +.IP \[bu] 2 +Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 10 +.SS --oos-copy-cutoff +.PP +Cutoff for switching to multipart copy. +.PP +Any files larger than this that need to be server-side copied will be +copied in chunks of this size. +.PP +The minimum is 0 and the maximum is 5 GiB. +.PP +Properties: +.IP \[bu] 2 +Config: copy_cutoff +.IP \[bu] 2 +Env Var: RCLONE_OOS_COPY_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 4.656Gi +.SS --oos-copy-timeout +.PP +Timeout for copy. +.PP +Copy is an asynchronous operation, specify timeout to wait for copy to +succeed +.PP +Properties: +.IP \[bu] 2 +Config: copy_timeout +.IP \[bu] 2 +Env Var: RCLONE_OOS_COPY_TIMEOUT +.IP \[bu] 2 +Type: Duration +.IP \[bu] 2 +Default: 1m0s +.SS --oos-disable-checksum +.PP +Don\[aq]t store MD5 checksum with object metadata. +.PP +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can add it to metadata on the object. +This is great for data integrity checking but can cause long delays for +large files to start uploading. +.PP +Properties: +.IP \[bu] 2 +Config: disable_checksum +.IP \[bu] 2 +Env Var: RCLONE_OOS_DISABLE_CHECKSUM +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --oos-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. 
+.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_OOS_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,InvalidUtf8,Dot +.SS --oos-leave-parts-on-error +.PP +If true avoid calling abort upload on a failure, leaving all +successfully uploaded parts for manual recovery. +.PP +It should be set to true for resuming uploads across different sessions. +.PP +WARNING: Storing parts of an incomplete multipart upload counts towards +space usage on object storage and will add additional costs if not +cleaned up. +.PP +Properties: +.IP \[bu] 2 +Config: leave_parts_on_error +.IP \[bu] 2 +Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --oos-attempt-resume-upload +.PP +If true attempt to resume previously started multipart upload for the +object. +This will be helpful to speed up multipart transfers by resuming uploads +from past session. +.PP +WARNING: If chunk size differs in resumed session from past incomplete +session, then the resumed multipart upload is aborted and a new +multipart upload is started with the new chunk size. +.PP +The flag leave_parts_on_error must be true to resume and optimize to +skip parts that were already uploaded successfully. +.PP +Properties: +.IP \[bu] 2 +Config: attempt_resume_upload +.IP \[bu] 2 +Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --oos-no-check-bucket +.PP +If set, don\[aq]t attempt to check the bucket exists or create it. +.PP +This can be useful when trying to minimise the number of transactions +rclone does if you know the bucket exists already. +.PP +It can also be needed if the user you are using does not have bucket +creation permissions. +.PP +Properties: +.IP \[bu] 2 +Config: no_check_bucket +.IP \[bu] 2 +Env Var: RCLONE_OOS_NO_CHECK_BUCKET +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --oos-sse-customer-key-file +.PP +To use SSE-C, a file containing the base64-encoded string of the AES-256 +encryption key associated with the object. +Please note only one of +sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.\[aq] +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_key_file +.IP \[bu] 2 +Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.RE +.SS --oos-sse-customer-key +.PP +To use SSE-C, the optional header that specifies the base64-encoded +256-bit encryption key to use to encrypt or decrypt the data. +Please note only one of +sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. +For more information, see Using Your Own Keys for Server-Side Encryption +(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_key +.IP \[bu] 2 +Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.RE +.SS --oos-sse-customer-key-sha256 +.PP +If using SSE-C, The optional header that specifies the base64-encoded +SHA256 hash of the encryption key. +This value is used to check the integrity of the encryption key. +see Using Your Own Keys for Server-Side Encryption +(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). 
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_sha256
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-kms-key-id
+.PP
+If using your own master key in vault, this header specifies the OCID
+(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm)
+of a master encryption key used to call the Key Management service to
+generate a data encryption key or to encrypt or decrypt a data
+encryption key.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_kms_key_id
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-algorithm
+.PP
+If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as
+the encryption algorithm.
+Object Storage supports \[dq]AES256\[dq] as the encryption algorithm.
+For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_algorithm
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.IP \[bu] 2
+\[dq]AES256\[dq]
+.RS 2
+.IP \[bu] 2
+AES256
+.RE
+.RE
+.SS --oos-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_OOS_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Backend commands
+.PP
+Here are the commands specific to the oracleobjectstorage backend.
+.PP
+Run them with
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
+The help below will explain what arguments each command takes.
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
+These can be run on a running backend using the rc command
+backend/command (https://rclone.org/rc/#backend-command).
+.SS rename
+.PP
+change the name of an object
+.IP
+.nf
+\f[C]
+rclone backend rename remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command can be used to rename an object.
+.PP
+Usage Examples:
+.IP
+.nf
+\f[C]
+rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+\f[R]
+.fi
+.SS list-multipart-uploads
+.PP
+List the unfinished multipart uploads
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command lists the unfinished multipart uploads in JSON format.
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+.PP
+You can call it with no bucket in which case it lists all buckets, with
+a bucket or with a bucket and path.
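+.PP
+For example, each of these invocations is valid (bucket and path names
+are illustrative):
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads oos:
+rclone backend list-multipart-uploads oos:bucket
+rclone backend list-multipart-uploads oos:bucket/path
+\f[R]
+.fi
+.PP
+A sample of the returned JSON: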
+.IP
+.nf
+\f[C]
+{
+  \[dq]test-bucket\[dq]: [
+    {
+      \[dq]namespace\[dq]: \[dq]test-namespace\[dq],
+      \[dq]bucket\[dq]: \[dq]test-bucket\[dq],
+      \[dq]object\[dq]: \[dq]600m.bin\[dq],
+      \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq],
+      \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq],
+      \[dq]storageTier\[dq]: \[dq]Standard\[dq]
+    }
+  ]
+}
+\f[R]
+.fi
+.SS cleanup
+.PP
+Remove unfinished multipart uploads.
+.IP
+.nf
+\f[C]
+rclone backend cleanup remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command removes unfinished multipart uploads of age greater than
+max-age which defaults to 24 hours.
+.PP
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
+.IP
+.nf
+\f[C]
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+.PP
+Options:
+.IP \[bu] 2
+\[dq]max-age\[dq]: Max age of upload to delete
+.SS restore
+.PP
+Restore objects from Archive to Standard storage
+.IP
+.nf
+\f[C]
+rclone backend restore remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command can be used to restore one or more objects from Archive to
+Standard storage.
+.IP
+.nf
+\f[C]
+Usage Examples:
+
+rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
+rclone backend restore oos:bucket -o hours=HOURS
+\f[R]
+.fi
+.PP
+This command also obeys the filters.
+Test first with the --interactive/-i or --dry-run flags
+.IP
+.nf
+\f[C]
+rclone --interactive backend restore --include \[dq]*.txt\[dq] oos:bucket/path -o hours=72
+\f[R]
+.fi
+.PP
+All the objects shown will be marked for restore, then
+.IP
+.nf
+\f[C]
+rclone backend restore --include \[dq]*.txt\[dq] oos:bucket/path -o hours=72
+
+It returns a list of status dictionaries with Object Name and Status
+keys. The Status will be \[dq]RESTORED\[dq] if it was successful or an error message
+if not.
+
+[
+  {
+    \[dq]Object\[dq]: \[dq]test.txt\[dq],
+    \[dq]Status\[dq]: \[dq]RESTORED\[dq]
+  },
+  {
+    \[dq]Object\[dq]: \[dq]test/file4.txt\[dq],
+    \[dq]Status\[dq]: \[dq]RESTORED\[dq]
+  }
+]
+\f[R]
+.fi
+.PP
+Options:
+.IP \[bu] 2
+\[dq]hours\[dq]: The number of hours for which this object will be
+restored.
+Default is 24 hrs.
+.SS Tutorials
+.SS Mounting Buckets (https://rclone.org/oracleobjectstorage/tutorial_mount/)
+.SH QingStor
+.PP
+Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
+the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:bucket/path/to/dir\f[R].
+.SS Configuration
+.PP
+Here is an example of making a QingStor configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / QingStor Object Storage
+ \[rs] \[dq]qingstor\[dq]
+[snip]
+Storage> qingstor
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value + 1 / Enter QingStor credentials in the next step + \[rs] \[dq]false\[dq] + 2 / Get QingStor credentials from the environment (env vars or IAM) + \[rs] \[dq]true\[dq] +env_auth> 1 +QingStor Access Key ID - leave blank for anonymous access or runtime credentials. +access_key_id> access_key +QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. +secret_access_key> secret_key +Enter an endpoint URL to connection QingStor API. +Leave blank will use the default value \[dq]https://qingstor.com:443\[dq] +endpoint> +Zone connect to. Default is \[dq]pek3a\[dq]. +Choose a number from below, or type in your own value + / The Beijing (China) Three Zone + 1 | Needs location constraint pek3a. + \[rs] \[dq]pek3a\[dq] + / The Shanghai (China) First Zone + 2 | Needs location constraint sh1a. + \[rs] \[dq]sh1a\[dq] +zone> 1 +Number of connection retry. +Leave blank will use the default value \[dq]3\[dq]. +connection_retries> +Remote config +-------------------- +[remote] +env_auth = false +access_key_id = access_key +secret_access_key = secret_key +endpoint = +zone = pek3a +connection_retries = +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This remote is called \f[C]remote\f[R] and can now be used like this +.PP +See all buckets +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +Make a new bucket +.IP +.nf +\f[C] +rclone mkdir remote:bucket +\f[R] +.fi +.PP +List the contents of a bucket +.IP +.nf +\f[C] +rclone ls remote:bucket +\f[R] +.fi +.PP +Sync \f[C]/home/local/directory\f[R] to the remote bucket, deleting any +excess files in the bucket. +.IP +.nf +\f[C] +rclone sync --interactive /home/local/directory remote:bucket +\f[R] +.fi +.SS --fast-list +.PP +This remote supports \f[C]--fast-list\f[R] which allows you to use fewer +transactions in exchange for more memory. +See the rclone docs (https://rclone.org/docs/#fast-list) for more +details. +.SS Multipart uploads +.PP +rclone supports multipart uploads with QingStor which means that it can +upload files bigger than 5 GiB. +Note that files uploaded with multipart upload don\[aq]t have an MD5SUM. +.PP +Note that incomplete multipart uploads older than 24 hours can be +removed with \f[C]rclone cleanup remote:bucket\f[R] just for one bucket +\f[C]rclone cleanup remote:\f[R] for all buckets. +QingStor does not ever remove incomplete multipart uploads so it may be +necessary to run this from time to time. +.SS Buckets and Zone +.PP +With QingStor you can list buckets (\f[C]rclone lsd\f[R]) using any +zone, but you can only access the content of a bucket from the zone it +was created in. +If you attempt to access a bucket from the wrong zone, you will get an +error, +\f[C]incorrect zone, the bucket is not in \[aq]XXX\[aq] zone\f[R]. +.SS Authentication +.PP +There are two ways to supply \f[C]rclone\f[R] with a set of QingStor +credentials. 
+In order of precedence: +.IP \[bu] 2 +Directly in the rclone configuration file (as configured by +\f[C]rclone config\f[R]) +.RS 2 +.IP \[bu] 2 +set \f[C]access_key_id\f[R] and \f[C]secret_access_key\f[R] +.RE +.IP \[bu] 2 +Runtime configuration: +.RS 2 +.IP \[bu] 2 +set \f[C]env_auth\f[R] to \f[C]true\f[R] in the config file +.IP \[bu] 2 +Exporting the following environment variables before running +\f[C]rclone\f[R] +.RS 2 +.IP \[bu] 2 +Access Key ID: \f[C]QS_ACCESS_KEY_ID\f[R] or \f[C]QS_ACCESS_KEY\f[R] +.IP \[bu] 2 +Secret Access Key: \f[C]QS_SECRET_ACCESS_KEY\f[R] or +\f[C]QS_SECRET_KEY\f[R] +.RE +.RE +.SS Restricted filename characters +.PP +The control characters 0x00-0x1F and / are replaced as in the default +restricted characters +set (https://rclone.org/overview/#restricted-characters). +Note that 0x7F is not replaced. +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to qingstor (QingCloud Object +Storage). +.SS --qingstor-env-auth +.PP +Get QingStor credentials from runtime. +.PP +Only applies if access_key_id and secret_access_key is blank. +.PP +Properties: +.IP \[bu] 2 +Config: env_auth +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ENV_AUTH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]false\[dq] +.RS 2 +.IP \[bu] 2 +Enter QingStor credentials in the next step. +.RE +.IP \[bu] 2 +\[dq]true\[dq] +.RS 2 +.IP \[bu] 2 +Get QingStor credentials from the environment (env vars or IAM). +.RE +.RE +.SS --qingstor-access-key-id +.PP +QingStor Access Key ID. +.PP +Leave blank for anonymous access or runtime credentials. +.PP +Properties: +.IP \[bu] 2 +Config: access_key_id +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-secret-access-key +.PP +QingStor Secret Access Key (password). +.PP +Leave blank for anonymous access or runtime credentials. +.PP +Properties: +.IP \[bu] 2 +Config: secret_access_key +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-endpoint +.PP +Enter an endpoint URL to connection QingStor API. +.PP +Leave blank will use the default value +\[dq]https://qingstor.com:443\[dq]. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ENDPOINT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-zone +.PP +Zone to connect to. +.PP +Default is \[dq]pek3a\[dq]. +.PP +Properties: +.IP \[bu] 2 +Config: zone +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ZONE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]pek3a\[dq] +.RS 2 +.IP \[bu] 2 +The Beijing (China) Three Zone. +.IP \[bu] 2 +Needs location constraint pek3a. +.RE +.IP \[bu] 2 +\[dq]sh1a\[dq] +.RS 2 +.IP \[bu] 2 +The Shanghai (China) First Zone. +.IP \[bu] 2 +Needs location constraint sh1a. +.RE +.IP \[bu] 2 +\[dq]gd2a\[dq] +.RS 2 +.IP \[bu] 2 +The Guangdong (China) Second Zone. +.IP \[bu] 2 +Needs location constraint gd2a. +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to qingstor (QingCloud Object +Storage). +.SS --qingstor-connection-retries +.PP +Number of connection retries. 
+.SS Advanced options
+.PP
+Here are the Advanced options specific to qingstor (QingCloud Object
+Storage).
+.SS --qingstor-connection-retries
+.PP
+Number of connection retries.
+.PP
+Properties:
+.IP \[bu] 2
+Config: connection_retries
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 3
+.SS --qingstor-upload-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 200Mi
+.SS --qingstor-chunk-size
+.PP
+Chunk size to use for uploading.
+.PP
+When uploading files larger than upload_cutoff they will be uploaded as
+multipart uploads using this chunk size.
+.PP
+Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size
+are buffered in memory per transfer.
+.PP
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 4Mi
+.SS --qingstor-upload-concurrency
+.PP
+Concurrency for multipart uploads.
+.PP
+This is the number of chunks of the same file that are uploaded
+concurrently.
+.PP
+NB if you set this to > 1 then the checksums of multipart uploads become
+corrupted (the uploads themselves are not corrupted though).
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 1
+.SS --qingstor-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,Ctl,InvalidUtf8
+.SS --qingstor-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+\f[C]rclone about\f[R] is not supported by the qingstor backend.
+Backends without this capability cannot determine free space for an
+rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member
+of an rclone union remote.
+.PP
+See List of backends that do not support rclone
+about (https://rclone.org/overview/#optional-features) and rclone
+about (https://rclone.org/commands/rclone_about/)
+.SH Quatrix
+.PP
+Quatrix by Maytech is a secure and compliant file sharing
+service (https://www.maytech.net/products/quatrix-business).
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.,
+\f[C]remote:directory/subdirectory\f[R].
+.PP
+The initial setup for Quatrix involves getting an API Key from Quatrix.
+You can get the API key in the user\[aq]s profile at
+\f[C]https://<account>/profile/api-keys\f[R] or with the help of the API
+at
+https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
+.PP
+See the complete Swagger documentation for Quatrix at
+https://docs.maytech.net/quatrix/quatrix-api/api-explorer
+.SS Configuration
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Quatrix by Maytech
+   \[rs] \[dq]quatrix\[dq]
+[snip]
+Storage> quatrix
+API key for accessing Quatrix account.
+api_key> your_api_key
+Host name of Quatrix account.
+host> example.quatrix.it
+
+--------------------
+[remote]
+api_key = your_api_key
+host = example.quatrix.it
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
+List directories in top level of your Quatrix
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+List all the files in your Quatrix
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to a Quatrix directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS API key validity
+.PP
API Key is created with no expiration date.
It will be valid until you delete or deactivate it in your account.
After disabling, the API Key can be enabled back.
If the API Key was deleted and a new key was created, you can update it
in rclone config.
The same happens if the hostname was changed.
-T}
-T{
-\[ga]\[ga]\[ga] $ rclone config Current remotes:
-T}
-T{
-Name Type ==== ==== remote quatrix
-T}
-T{
-e) Edit existing remote n) New remote d) Delete remote r) Rename remote
-c) Copy remote s) Set configuration password q) Quit config
-e/n/d/r/c/s/q> e Choose a number from below, or type in an existing
-value 1 > remote remote> remote
-T}
-.TE
+.IP
+.nf
+\f[C]
+$ rclone config
+Current remotes:
+
+Name Type
+==== ====
+remote quatrix
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> e
+Choose a number from below, or type in an existing value
+ 1 > remote
+remote> remote
+--------------------
+[remote]
+type = quatrix
+host = some_host.quatrix.it
+api_key = your_api_key
+--------------------
+Edit remote
+Option api_key.
+API key for accessing Quatrix account
+Enter a string value. Press Enter for the default (your_api_key)
+api_key>
+Option host.
+Host name of Quatrix account
+Enter a string value. Press Enter for the default (some_host.quatrix.it).
+
+--------------------
+[remote]
+type = quatrix
+host = some_host.quatrix.it
+api_key = your_api_key
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Modification times and hashes
.PP
-[remote] type = quatrix host = some_host.quatrix.it api_key =
-your_api_key -------------------- Edit remote Option api_key.
-API key for accessing Quatrix account Enter a string value.
-Press Enter for the default (your_api_key) api_key> Option host.
-Host name of Quatrix account Enter a string value.
-Press Enter for the default (some_host.quatrix.it).
-.PP
-.TS
-tab(@);
-lw(20.4n).
-T{
-[remote] type = quatrix host = some_host.quatrix.it api_key =
-your_api_key
-T}
-_
-T{
-y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
-\[ga]\[ga]\[ga]
-T}
-T{
-### Modification times and hashes
-T}
-T{
Quatrix allows modification times to be set on objects accurate to 1
microsecond.
These will be used to detect whether objects need syncing or not. -T} -T{ +.PP Quatrix does not support hashes, so you cannot use the \f[C]--checksum\f[R] flag. -T} -T{ -### Restricted filename characters -T} -T{ +.SS Restricted filename characters +.PP File names in Quatrix are case sensitive and have limitations like the maximum length of a filename is 255, and the minimum length is 1. A file name cannot be equal to \f[C].\f[R] or \f[C]..\f[R] nor contain \f[C]/\f[R] , \f[C]\[rs]\f[R] or non-printable ascii. -T} -T{ -### Transfers -T} -T{ +.SS Transfers +.PP For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to \f[C]--transfers\f[R] chunks at the same time (shared among all multipart uploads). @@ -50567,150 +54307,155 @@ increase in case of high upload speed. As well as it can decrease in case of upload speed problems. If no free memory is available, all chunks will equal \f[C]minimal_chunk_size\f[R]. -T} -T{ -### Deleting files -T} -T{ +.SS Deleting files +.PP Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account. -T} -T{ -### Standard options -T} -T{ +.SS Standard options +.PP Here are the Standard options specific to quatrix (Quatrix by Maytech). -T} -T{ -#### --quatrix-api-key -T} -T{ +.SS --quatrix-api-key +.PP API key for accessing Quatrix account -T} -T{ +.PP Properties: -T} -T{ -- Config: api_key - Env Var: RCLONE_QUATRIX_API_KEY - Type: string - +.IP \[bu] 2 +Config: api_key +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_API_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 Required: true -T} -T{ -#### --quatrix-host -T} -T{ +.SS --quatrix-host +.PP Host name of Quatrix account -T} -T{ +.PP Properties: -T} -T{ -- Config: host - Env Var: RCLONE_QUATRIX_HOST - Type: string - Required: -true -T} -T{ -### Advanced options -T} -T{ +.IP \[bu] 2 +Config: host +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_HOST +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP Here are the Advanced options specific to quatrix (Quatrix by Maytech). -T} -T{ -#### --quatrix-encoding -T} -T{ +.SS --quatrix-encoding +.PP The encoding for the backend. -T} -T{ +.PP See the encoding section in the overview (https://rclone.org/overview/#encoding) for more info. 
-T} -T{ +.PP Properties: -T} -T{ -- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type: Encoding - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot -T} -T{ -#### --quatrix-effective-upload-time -T} -T{ +.SS --quatrix-effective-upload-time +.PP Wanted upload time for one chunk -T} -T{ +.PP Properties: -T} -T{ -- Config: effective_upload_time - Env Var: -RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME - Type: string - Default: -\[dq]4s\[dq] -T} -T{ -#### --quatrix-minimal-chunk-size -T} -T{ +.IP \[bu] 2 +Config: effective_upload_time +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]4s\[dq] +.SS --quatrix-minimal-chunk-size +.PP The minimal size for one chunk -T} -T{ +.PP Properties: -T} -T{ -- Config: minimal_chunk_size - Env Var: -RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE - Type: SizeSuffix - Default: 9.537Mi -T} -T{ -#### --quatrix-maximal-summary-chunk-size -T} -T{ +.IP \[bu] 2 +Config: minimal_chunk_size +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 9.537Mi +.SS --quatrix-maximal-summary-chunk-size +.PP The maximal summary for all chunks. It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] -T} -T{ +.PP Properties: -T} -T{ -- Config: maximal_summary_chunk_size - Env Var: -RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE - Type: SizeSuffix - Default: -95.367Mi -T} -T{ -#### --quatrix-hard-delete -T} -T{ -Delete files permanently rather than putting them into the trash. -T} -T{ +.IP \[bu] 2 +Config: maximal_summary_chunk_size +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 95.367Mi +.SS --quatrix-hard-delete +.PP +Delete files permanently rather than putting them into the trash +.PP Properties: -T} -T{ -- Config: hard_delete - Env Var: RCLONE_QUATRIX_HARD_DELETE - Type: bool -- Default: false -T} -T{ -## Storage usage -T} -T{ +.IP \[bu] 2 +Config: hard_delete +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_HARD_DELETE +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --quatrix-skip-project-folders +.PP +Skip project folders in operations +.PP +Properties: +.IP \[bu] 2 +Config: skip_project_folders +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --quatrix-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Storage usage +.PP The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you\[aq]ve reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota. -T} -T{ -## Server-side operations -T} -T{ +.SS Server-side operations +.PP Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation. -T} -T{ -# Sia -T} -T{ +.SH Sia +.PP Sia (sia.tech (https://sia.tech/)) is a decentralized cloud storage platform based on the blockchain (https://wikipedia.org/wiki/Blockchain) technology. @@ -50721,27 +54466,22 @@ Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. 
If you are new to it, you\[aq]d better first familiarize yourself using their excellent support documentation (https://support.sia.tech/). -T} -T{ -## Introduction -T} -T{ +.SS Introduction +.PP Before you can use rclone with Sia, you will need to have a running copy of \f[C]Sia-UI\f[R] or \f[C]siad\f[R] (the Sia daemon) locally on your computer or on local network (e.g. a NAS). Please follow the Get started (https://sia.tech/get-started) guide and install one. -T} -T{ +.PP rclone interacts with Sia network by talking to the Sia daemon via HTTP API (https://sia.tech/docs/) which is usually available on port \f[I]9980\f[R]. By default you will run the daemon locally on the same computer so it\[aq]s safe to leave the API password blank (the API URL will be \f[C]http://127.0.0.1:9980\f[R] making external access impossible). -T} -T{ +.PP However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you\[aq]ll need to @@ -50757,8 +54497,7 @@ variable \f[C]SIA_API_PASSWORD\f[R] or text file named \f[C]apipassword\f[R] in the daemon directory. - Set rclone backend option \f[C]api_password\f[R] taking it from above locations. -T} -T{ +.PP Notes: 1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command @@ -50780,2853 +54519,3879 @@ The only way to use \f[C]siad\f[R] without API password is to run it \f[B]on localhost\f[R] with command line argument \f[C]--authorize-api=false\f[R], but this is insecure and \f[B]strongly discouraged\f[R]. -T} -T{ -## Configuration -T} -T{ +.SS Configuration +.PP Here is an example of how to make a \f[C]sia\f[R] remote called \f[C]mySia\f[R]. First, run: -T} -T{ -rclone config -T} -T{ +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP This will guide you through an interactive setup process: -T} -T{ -\[ga]\[ga]\[ga] No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> mySia Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value ... -29 / Sia Decentralized Cloud \ \[dq]sia\[dq] ... -Storage> sia Sia daemon API URL, like http://sia.daemon.host:9980. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> mySia +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +\&... +29 / Sia Decentralized Cloud + \[rs] \[dq]sia\[dq] +\&... +Storage> sia +Sia daemon API URL, like http://sia.daemon.host:9980. +Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). +Keep default if Sia daemon runs on localhost. +Enter a string value. Press Enter for the default (\[dq]http://127.0.0.1:9980\[dq]). +api_url> http://127.0.0.1:9980 +Sia Daemon API Password. +Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> y +Enter the password: +password: +Confirm the password: +password: +Edit advanced config? 
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[mySia]
+type = sia
+api_url = http://127.0.0.1:9980
+api_password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+Once configured, you can then use \f[C]rclone\f[R] like this:
+.IP \[bu] 2
+List directories in top level of your Sia storage
+.IP
+.nf
+\f[C]
+rclone lsd mySia:
+\f[R]
+.fi
+.IP \[bu] 2
+List all the files in your Sia storage
+.IP
+.nf
+\f[C]
+rclone ls mySia:
+\f[R]
+.fi
+.IP \[bu] 2
+Upload a local directory to the Sia directory called \f[I]backup\f[R]
+.IP
+.nf
+\f[C]
+rclone copy /home/source mySia:backup
+\f[R]
+.fi
+.SS Standard options
+.PP
+Here are the Standard options specific to sia (Sia Decentralized Cloud).
+.SS --sia-api-url
+.PP
+Sia daemon API URL, like http://sia.daemon.host:9980.
+.PP
+Note that siad must run with --disable-api-security to open API port for
+other hosts (not recommended).
+Keep default if Sia daemon runs on localhost.
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_url
+.IP \[bu] 2
+Env Var: RCLONE_SIA_API_URL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]http://127.0.0.1:9980\[dq]
+.SS --sia-api-password
+.PP
+Sia Daemon API Password.
+.PP
+Can be found in the apipassword file located in HOME/.sia/ or in the
+daemon directory.
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_password
+.IP \[bu] 2
+Env Var: RCLONE_SIA_API_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to sia (Sia Decentralized Cloud).
+.SS --sia-user-agent
+.PP
+Siad User Agent
+.PP
+Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for
+security.
+.PP
+Properties:
+.IP \[bu] 2
+Config: user_agent
+.IP \[bu] 2
+Env Var: RCLONE_SIA_USER_AGENT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]Sia-Agent\[dq]
+.SS --sia-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_SIA_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
+.SS --sia-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SIA_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.IP \[bu] 2
+Modification times not supported
+.IP \[bu] 2
+Checksums not supported
+.IP \[bu] 2
+\f[C]rclone about\f[R] not supported
+.IP \[bu] 2
+rclone can work only with \f[I]Siad\f[R] or \f[I]Sia-UI\f[R] at the
+moment; the \f[B]SkyNet daemon is not supported yet.\f[R]
+.IP \[bu] 2
+Sia does not allow control characters or symbols like question and pound
+signs in file names.
+rclone will transparently encode (https://rclone.org/overview/#encoding) +them for you, but you\[aq]d better be aware +.SH Swift +.PP +Swift refers to OpenStack Object +Storage (https://docs.openstack.org/swift/latest/). +Commercial implementations of that being: +.IP \[bu] 2 +Rackspace Cloud Files (https://www.rackspace.com/cloud/files/) +.IP \[bu] 2 +Memset Memstore (https://www.memset.com/cloud/storage/) +.IP \[bu] 2 +OVH Object +Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/) +.IP \[bu] 2 +Oracle Cloud +Storage (https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) +.IP \[bu] 2 +Blomp Cloud Storage (https://www.blomp.com/cloud-storage/) +.IP \[bu] 2 +IBM Bluemix Cloud ObjectStorage +Swift (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) +.PP +Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R] +for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g. +\f[C]remote:container/path/to/dir\f[R]. +.SS Configuration +.PP +Here is an example of making a swift configuration. +First run +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH) + \[rs] \[dq]swift\[dq] +[snip] +Storage> swift +Get swift credentials from environment variables in standard OpenStack form. +Choose a number from below, or type in your own value + 1 / Enter swift credentials in the next step + \[rs] \[dq]false\[dq] + 2 / Get swift credentials from environment vars. Leave other fields blank if using this. + \[rs] \[dq]true\[dq] +env_auth> true +User name to log in (OS_USERNAME). +user> +API key or password (OS_PASSWORD). +key> +Authentication URL for server (OS_AUTH_URL). +Choose a number from below, or type in your own value + 1 / Rackspace US + \[rs] \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] + 2 / Rackspace UK + \[rs] \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] + 3 / Rackspace v2 + \[rs] \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] + 4 / Memset Memstore UK + \[rs] \[dq]https://auth.storage.memset.com/v1.0\[dq] + 5 / Memset Memstore UK v2 + \[rs] \[dq]https://auth.storage.memset.com/v2.0\[dq] + 6 / OVH + \[rs] \[dq]https://auth.cloud.ovh.net/v3\[dq] + 7 / Blomp Cloud Storage + \[rs] \[dq]https://authenticate.ain.net\[dq] +auth> +User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). 
+user_id>
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+domain>
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+Choose a number from below, or type in your own value
+ 1 / Public (default, choose this if not sure)
+   \[rs] \[dq]public\[dq]
+ 2 / Internal (use internal service net)
+   \[rs] \[dq]internal\[dq]
+ 3 / Admin
+   \[rs] \[dq]admin\[dq]
+endpoint_type>
+Remote config
+--------------------
+[remote]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This remote is called \f[C]remote\f[R] and can now be used like this
+.PP
+See all containers
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+Make a new container
+.IP
+.nf
+\f[C]
+rclone mkdir remote:container
+\f[R]
+.fi
+.PP
+List the contents of a container
+.IP
+.nf
+\f[C]
+rclone ls remote:container
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote container, deleting
+any excess files in the container.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory remote:container
+\f[R]
+.fi
+.SS Configuration from an OpenStack credentials file
+.PP
+An OpenStack credentials file typically looks something like this
+(without the comments)
+.IP
+.nf
+\f[C]
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME=\[dq]1234567890123456\[dq]
+export OS_USERNAME=\[dq]123abc567xy\[dq]
+echo \[dq]Please enter your OpenStack Password: \[dq]
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME=\[dq]SBG1\[dq]
+if [ -z \[dq]$OS_REGION_NAME\[dq] ]; then unset OS_REGION_NAME; fi
+\f[R]
+.fi
+.PP
+The config file needs to look something like this where
+\f[C]$OS_USERNAME\f[R] represents the value of the \f[C]OS_USERNAME\f[R]
+variable - \f[C]123abc567xy\f[R] in the example above.
+.IP
+.nf
+\f[C]
+[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
+\f[R]
+.fi
+.PP
+Note that you may (or may not) need to set \f[C]region\f[R] too - try
+without first.
+.SS Configuration from the environment
+.PP
+If you prefer, you can configure rclone to use swift using a standard
+set of OpenStack environment variables.
+.PP
+When you run through the config, make sure you choose \f[C]true\f[R] for
+\f[C]env_auth\f[R] and leave everything else blank.
+.PP
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables.
+There is a list of the
+variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+in the docs for the swift library.
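+.PP
+As a minimal sketch (the remote name is illustrative), such a remote
+needs nothing more than this in the config file:
+.IP
+.nf
+\f[C]
+[envswift]
+type = swift
+env_auth = true
+\f[R]
+.fi
+.PP
+With the standard \f[C]OS_*\f[R] variables exported, for example from a
+credentials file as above, \f[C]rclone lsd envswift:\f[R] will then pick
+up the remaining settings from the environment.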
+.SS Using an alternate authentication method
+.PP
+If your OpenStack installation uses a non-standard authentication
+method that might not yet be supported by rclone or the underlying
+swift library, you can authenticate externally (e.g.
+by manually calling the \f[C]openstack\f[R] commands to get a token).
+Then, you just need to pass the two configuration variables
+\f[C]auth_token\f[R] and \f[C]storage_url\f[R].
+If they are both provided, the other variables are ignored.
+rclone will not try to authenticate but instead assume it is already
+authenticated and use these two variables to access the OpenStack
+installation.
+.SS Using rclone without a config file
+.PP
+You can use rclone with swift without a config file, if desired, like
+this:
+.IP
+.nf
+\f[C]
+source openstack-credentials-file
+export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone lsd myremote:
+\f[R]
+.fi
+.SS --fast-list
+.PP
+This remote supports \f[C]--fast-list\f[R] which allows you to use fewer
+transactions in exchange for more memory.
+See the rclone docs (https://rclone.org/docs/#fast-list) for more
+details.
+.SS --update and --use-server-modtime
+.PP
+As noted below, the modified time is stored in metadata on the object.
+It is used by default for all operations that require checking the time
+a file was last updated.
+It allows rclone to treat the remote more like a true filesystem, but it
+is inefficient because it requires an extra API call to retrieve the
+metadata.
+.PP
+For many operations, the time the object was last uploaded to the remote
+is sufficient to determine if it is \[dq]dirty\[dq].
+By using \f[C]--update\f[R] along with \f[C]--use-server-modtime\f[R],
+you can avoid the extra API call and simply upload files whose local
+modtime is newer than the time it was last uploaded.
+.SS Modification times and hashes
+.PP
+The modified time is stored as metadata on the object as
+\f[C]X-Object-Meta-Mtime\f[R] as floating point since the epoch accurate
+to 1 ns.
+.PP
+This is a de facto standard (used in the official python-swiftclient
+amongst others) for storing the modification time for an object.
+.PP
+The MD5 hash algorithm is supported.
+.SS Restricted filename characters
+.PP
+.TS
+tab(@);
+l c c.
+T{
+Character
+T}@T{
+Value
+T}@T{
+Replacement
+T}
+_
+T{
+NUL
+T}@T{
+0x00
+T}@T{
+\[u2400]
+T}
+T{
+/
+T}@T{
+0x2F
+T}@T{
+\[uFF0F]
T}
.TE
.PP
-[mySia] type = sia api_url = http://127.0.0.1:9980 api_password = ***
-ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit
-this remote d) Delete this remote y/e/d> y
-.IP
-.nf
-\f[C]
-Once configured, you can then use \[ga]rclone\[ga] like this:
-
-- List directories in top level of your Sia storage
-\f[R]
-.fi
.PP
-rclone lsd mySia:
-.IP
-.nf
-\f[C]
-- List all the files in your Sia storage
-\f[R]
-.fi
+Invalid UTF-8 bytes will also be
+replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
+be used in JSON strings.
+.SS Standard options
.PP
-rclone ls mySia:
-.IP
-.nf
-\f[C]
-- Upload a local directory to the Sia directory called _backup_
-\f[R]
-.fi
+Here are the Standard options specific to swift (OpenStack Swift
+(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
+.SS --swift-env-auth
.PP
-rclone copy /home/source mySia:backup
-.IP
-.nf
-\f[C]
-
-### Standard options
-
-Here are the Standard options specific to sia (Sia Decentralized Cloud).
- -#### --sia-api-url - -Sia daemon API URL, like http://sia.daemon.host:9980. - -Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). -Keep default if Sia daemon runs on localhost. - Properties: - -- Config: api_url -- Env Var: RCLONE_SIA_API_URL -- Type: string -- Default: \[dq]http://127.0.0.1:9980\[dq] - -#### --sia-api-password - -Sia Daemon API Password. - -Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: api_password -- Env Var: RCLONE_SIA_API_PASSWORD -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to sia (Sia Decentralized Cloud). - -#### --sia-user-agent - -Siad User Agent - -Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for security - -Properties: - -- Config: user_agent -- Env Var: RCLONE_SIA_USER_AGENT -- Type: string -- Default: \[dq]Sia-Agent\[dq] - -#### --sia-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_SIA_ENCODING -- Type: Encoding -- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -- Modification times not supported -- Checksums not supported -- \[ga]rclone about\[ga] not supported -- rclone can work only with _Siad_ or _Sia-UI_ at the moment, - the **SkyNet daemon is not supported yet.** -- Sia does not allow control characters or symbols like question and pound - signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) - them for you, but you\[aq]d better be aware - -# Swift - -Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). -Commercial implementations of that being: - - * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) - * [Memset Memstore](https://www.memset.com/cloud/storage/) - * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) - * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) - * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) - * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) - -Paths are specified as \[ga]remote:container\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] -command.) You may put subdirectories in too, e.g. \[ga]remote:container/path/to/dir\[ga]. - -## Configuration - -Here is an example of making a swift configuration. First run - - rclone config - -This will guide you through an interactive setup process. -\f[R] -.fi -.PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset -Memstore, OVH) \ \[dq]swift\[dq] [snip] Storage> swift Get swift -credentials from environment variables in standard OpenStack form. -Choose a number from below, or type in your own value 1 / Enter swift -credentials in the next step \ \[dq]false\[dq] 2 / Get swift credentials -from environment vars. 
+.IP \[bu] 2 +Config: env_auth +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENV_AUTH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]false\[dq] +.RS 2 +.IP \[bu] 2 +Enter swift credentials in the next step. +.RE +.IP \[bu] 2 +\[dq]true\[dq] +.RS 2 +.IP \[bu] 2 +Get swift credentials from environment vars. +.IP \[bu] 2 Leave other fields blank if using this. -\ \[dq]true\[dq] env_auth> true User name to log in (OS_USERNAME). -user> API key or password (OS_PASSWORD). -key> Authentication URL for server (OS_AUTH_URL). -Choose a number from below, or type in your own value 1 / Rackspace US -\ \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] 2 / Rackspace UK -\ \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] 3 / Rackspace -v2 \ \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] 4 / Memset -Memstore UK \ \[dq]https://auth.storage.memset.com/v1.0\[dq] 5 / Memset -Memstore UK v2 \ \[dq]https://auth.storage.memset.com/v2.0\[dq] 6 / OVH -\ \[dq]https://auth.cloud.ovh.net/v3\[dq] 7 / Blomp Cloud Storage -\ \[dq]https://authenticate.ain.net\[dq] auth> User ID to log in - -optional - most swift systems use user and leave this blank (v3 auth) -(OS_USER_ID). -user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> -Tenant name - optional for v1 auth, this or tenant_id required otherwise -(OS_TENANT_NAME or OS_PROJECT_NAME) tenant> Tenant ID - optional for v1 -auth, this or tenant required otherwise (OS_TENANT_ID) tenant_id> Tenant -domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) tenant_domain> -Region name - optional (OS_REGION_NAME) region> Storage URL - optional -(OS_STORAGE_URL) storage_url> Auth Token from alternate authentication - -optional (OS_AUTH_TOKEN) auth_token> AuthVersion - optional - set to -(1,2,3) if your auth URL has no version (ST_AUTH_VERSION) auth_version> -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) -Choose a number from below, or type in your own value 1 / Public -(default, choose this if not sure) \ \[dq]public\[dq] 2 / Internal (use -internal service net) \ \[dq]internal\[dq] 3 / Admin \ \[dq]admin\[dq] -endpoint_type> Remote config -------------------- [test] env_auth = true -user = key = auth = user_id = domain = tenant = tenant_id = -tenant_domain = region = storage_url = auth_token = auth_version = -endpoint_type = -------------------- y) Yes this is OK e) Edit this -remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -This remote is called \[ga]remote\[ga] and can now be used like this - -See all containers - - rclone lsd remote: - -Make a new container - - rclone mkdir remote:container - -List the contents of a container - - rclone ls remote:container - -Sync \[ga]/home/local/directory\[ga] to the remote container, deleting any -excess files in the container. 
- - rclone sync --interactive /home/local/directory remote:container - -### Configuration from an OpenStack credentials file - -An OpenStack credentials file typically looks something something -like this (without the comments) -\f[R] -.fi +.RE +.RE +.SS --swift-user .PP -export OS_AUTH_URL=https://a.provider.net/v2.0 export -OS_TENANT_ID=ffffffffffffffffffffffffffffffff export -OS_TENANT_NAME=\[dq]1234567890123456\[dq] export -OS_USERNAME=\[dq]123abc567xy\[dq] echo \[dq]Please enter your OpenStack -Password: \[dq] read -sr OS_PASSWORD_INPUT export -OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME=\[dq]SBG1\[dq] if [ -z \[dq]$OS_REGION_NAME\[dq] -]; then unset OS_REGION_NAME; fi -.IP -.nf -\f[C] -The config file needs to look something like this where \[ga]$OS_USERNAME\[ga] -represents the value of the \[ga]OS_USERNAME\[ga] variable - \[ga]123abc567xy\[ga] in -the example above. -\f[R] -.fi -.PP -[remote] type = swift user = $OS_USERNAME key = $OS_PASSWORD auth = -$OS_AUTH_URL tenant = $OS_TENANT_NAME -.IP -.nf -\f[C] -Note that you may (or may not) need to set \[ga]region\[ga] too - try without first. - -### Configuration from the environment - -If you prefer you can configure rclone to use swift using a standard -set of OpenStack environment variables. - -When you run through the config, make sure you choose \[ga]true\[ga] for -\[ga]env_auth\[ga] and leave everything else blank. - -rclone will then set any empty config parameters from the environment -using standard OpenStack environment variables. There is [a list of -the -variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) -in the docs for the swift library. - -### Using an alternate authentication method - -If your OpenStack installation uses a non-standard authentication method -that might not be yet supported by rclone or the underlying swift library, -you can authenticate externally (e.g. calling manually the \[ga]openstack\[ga] -commands to get a token). Then, you just need to pass the two -configuration variables \[ga]\[ga]auth_token\[ga]\[ga] and \[ga]\[ga]storage_url\[ga]\[ga]. -If they are both provided, the other variables are ignored. rclone will -not try to authenticate but instead assume it is already authenticated -and use these two variables to access the OpenStack installation. - -#### Using rclone without a config file - -You can use rclone with swift without a config file, if desired, like -this: -\f[R] -.fi -.PP -source openstack-credentials-file export -RCLONE_CONFIG_MYREMOTE_TYPE=swift export -RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: -.IP -.nf -\f[C] -### --fast-list - -This remote supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. - -### --update and --use-server-modtime - -As noted below, the modified time is stored on metadata on the object. It is -used by default for all operations that require checking the time a file was -last updated. It allows rclone to treat the remote more like a true filesystem, -but it is inefficient because it requires an extra API call to retrieve the -metadata. - -For many operations, the time the object was last uploaded to the remote is -sufficient to determine if it is \[dq]dirty\[dq]. By using \[ga]--update\[ga] along with -\[ga]--use-server-modtime\[ga], you can avoid the extra API call and simply upload -files whose local modtime is newer than the time it was last uploaded. 
- -### Modification times and hashes - -The modified time is stored as metadata on the object as -\[ga]X-Object-Meta-Mtime\[ga] as floating point since the epoch accurate to 1 -ns. - -This is a de facto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -The MD5 hash algorithm is supported. - -### Restricted filename characters - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| NUL | 0x00 | \[u2400] | -| / | 0x2F | \[uFF0F] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - -#### --swift-env-auth - -Get swift credentials from environment variables in standard OpenStack form. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_SWIFT_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - \[dq]false\[dq] - - Enter swift credentials in the next step. - - \[dq]true\[dq] - - Get swift credentials from environment vars. - - Leave other fields blank if using this. - -#### --swift-user - User name to log in (OS_USERNAME). - +.PP Properties: - -- Config: user -- Env Var: RCLONE_SWIFT_USER -- Type: string -- Required: false - -#### --swift-key - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-key +.PP API key or password (OS_PASSWORD). - +.PP Properties: - -- Config: key -- Env Var: RCLONE_SWIFT_KEY -- Type: string -- Required: false - -#### --swift-auth - +.IP \[bu] 2 +Config: key +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth +.PP Authentication URL for server (OS_AUTH_URL). - +.PP Properties: - -- Config: auth -- Env Var: RCLONE_SWIFT_AUTH -- Type: string -- Required: false -- Examples: - - \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] - - Rackspace US - - \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] - - Rackspace UK - - \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] - - Rackspace v2 - - \[dq]https://auth.storage.memset.com/v1.0\[dq] - - Memset Memstore UK - - \[dq]https://auth.storage.memset.com/v2.0\[dq] - - Memset Memstore UK v2 - - \[dq]https://auth.cloud.ovh.net/v3\[dq] - - OVH - - \[dq]https://authenticate.ain.net\[dq] - - Blomp Cloud Storage - -#### --swift-user-id - -User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). 
- +.IP \[bu] 2 +Config: auth +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace US +.RE +.IP \[bu] 2 +\[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace UK +.RE +.IP \[bu] 2 +\[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace v2 +.RE +.IP \[bu] 2 +\[dq]https://auth.storage.memset.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Memset Memstore UK +.RE +.IP \[bu] 2 +\[dq]https://auth.storage.memset.com/v2.0\[dq] +.RS 2 +.IP \[bu] 2 +Memset Memstore UK v2 +.RE +.IP \[bu] 2 +\[dq]https://auth.cloud.ovh.net/v3\[dq] +.RS 2 +.IP \[bu] 2 +OVH +.RE +.IP \[bu] 2 +\[dq]https://authenticate.ain.net\[dq] +.RS 2 +.IP \[bu] 2 +Blomp Cloud Storage +.RE +.RE +.SS --swift-user-id +.PP +User ID to log in - optional - most swift systems use user and leave +this blank (v3 auth) (OS_USER_ID). +.PP Properties: - -- Config: user_id -- Env Var: RCLONE_SWIFT_USER_ID -- Type: string -- Required: false - -#### --swift-domain - +.IP \[bu] 2 +Config: user_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_USER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-domain +.PP User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - +.PP Properties: - -- Config: domain -- Env Var: RCLONE_SWIFT_DOMAIN -- Type: string -- Required: false - -#### --swift-tenant - -Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME). - +.IP \[bu] 2 +Config: domain +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_DOMAIN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant +.PP +Tenant name - optional for v1 auth, this or tenant_id required otherwise +(OS_TENANT_NAME or OS_PROJECT_NAME). +.PP Properties: - -- Config: tenant -- Env Var: RCLONE_SWIFT_TENANT -- Type: string -- Required: false - -#### --swift-tenant-id - -Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID). - +.IP \[bu] 2 +Config: tenant +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant-id +.PP +Tenant ID - optional for v1 auth, this or tenant required otherwise +(OS_TENANT_ID). +.PP Properties: - -- Config: tenant_id -- Env Var: RCLONE_SWIFT_TENANT_ID -- Type: string -- Required: false - -#### --swift-tenant-domain - +.IP \[bu] 2 +Config: tenant_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant-domain +.PP Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). - +.PP Properties: - -- Config: tenant_domain -- Env Var: RCLONE_SWIFT_TENANT_DOMAIN -- Type: string -- Required: false - -#### --swift-region - +.IP \[bu] 2 +Config: tenant_domain +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT_DOMAIN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-region +.PP Region name - optional (OS_REGION_NAME). - +.PP Properties: - -- Config: region -- Env Var: RCLONE_SWIFT_REGION -- Type: string -- Required: false - -#### --swift-storage-url - +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_REGION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-storage-url +.PP Storage URL - optional (OS_STORAGE_URL). 
- +.PP Properties: - -- Config: storage_url -- Env Var: RCLONE_SWIFT_STORAGE_URL -- Type: string -- Required: false - -#### --swift-auth-token - +.IP \[bu] 2 +Config: storage_url +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_STORAGE_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth-token +.PP Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). - +.PP Properties: - -- Config: auth_token -- Env Var: RCLONE_SWIFT_AUTH_TOKEN -- Type: string -- Required: false - -#### --swift-application-credential-id - +.IP \[bu] 2 +Config: auth_token +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-id +.PP Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). - +.PP Properties: - -- Config: application_credential_id -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID -- Type: string -- Required: false - -#### --swift-application-credential-name - +.IP \[bu] 2 +Config: application_credential_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-name +.PP Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). - +.PP Properties: - -- Config: application_credential_name -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME -- Type: string -- Required: false - -#### --swift-application-credential-secret - +.IP \[bu] 2 +Config: application_credential_name +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-secret +.PP Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). - +.PP Properties: - -- Config: application_credential_secret -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET -- Type: string -- Required: false - -#### --swift-auth-version - -AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION). - +.IP \[bu] 2 +Config: application_credential_secret +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth-version +.PP +AuthVersion - optional - set to (1,2,3) if your auth URL has no version +(ST_AUTH_VERSION). +.PP Properties: - -- Config: auth_version -- Env Var: RCLONE_SWIFT_AUTH_VERSION -- Type: int -- Default: 0 - -#### --swift-endpoint-type - +.IP \[bu] 2 +Config: auth_version +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH_VERSION +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 0 +.SS --swift-endpoint-type +.PP Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). - +.PP Properties: - -- Config: endpoint_type -- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE -- Type: string -- Default: \[dq]public\[dq] -- Examples: - - \[dq]public\[dq] - - Public (default, choose this if not sure) - - \[dq]internal\[dq] - - Internal (use internal service net) - - \[dq]admin\[dq] - - Admin - -#### --swift-storage-policy - +.IP \[bu] 2 +Config: endpoint_type +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENDPOINT_TYPE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]public\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]public\[dq] +.RS 2 +.IP \[bu] 2 +Public (default, choose this if not sure) +.RE +.IP \[bu] 2 +\[dq]internal\[dq] +.RS 2 +.IP \[bu] 2 +Internal (use internal service net) +.RE +.IP \[bu] 2 +\[dq]admin\[dq] +.RS 2 +.IP \[bu] 2 +Admin +.RE +.RE +.SS --swift-storage-policy +.PP The storage policy to use when creating a new container. 
- -This applies the specified storage policy when creating a new -container. The policy cannot be changed afterwards. The allowed -configuration values and their meaning depend on your Swift storage -provider. - +.PP +This applies the specified storage policy when creating a new container. +The policy cannot be changed afterwards. +The allowed configuration values and their meaning depend on your Swift +storage provider. +.PP Properties: - -- Config: storage_policy -- Env Var: RCLONE_SWIFT_STORAGE_POLICY -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - Default - - \[dq]pcs\[dq] - - OVH Public Cloud Storage - - \[dq]pca\[dq] - - OVH Public Cloud Archive - -### Advanced options - -Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - -#### --swift-leave-parts-on-error - +.IP \[bu] 2 +Config: storage_policy +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_STORAGE_POLICY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +Default +.RE +.IP \[bu] 2 +\[dq]pcs\[dq] +.RS 2 +.IP \[bu] 2 +OVH Public Cloud Storage +.RE +.IP \[bu] 2 +\[dq]pca\[dq] +.RS 2 +.IP \[bu] 2 +OVH Public Cloud Archive +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to swift (OpenStack Swift +(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). +.SS --swift-leave-parts-on-error +.PP If true avoid calling abort upload on a failure. - +.PP It should be set to true for resuming uploads across different sessions. - +.PP Properties: - -- Config: leave_parts_on_error -- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false - -#### --swift-chunk-size - +.IP \[bu] 2 +Config: leave_parts_on_error +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-chunk-size +.PP Above this size files will be chunked into a _segments container. - -Above this size files will be chunked into a _segments container. The -default for this is 5 GiB which is its maximum value. - +.PP +Above this size files will be chunked into a _segments container. +The default for this is 5 GiB which is its maximum value. +.PP Properties: - -- Config: chunk_size -- Env Var: RCLONE_SWIFT_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Gi - -#### --swift-no-chunk - +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 5Gi +.SS --swift-no-chunk +.PP Don\[aq]t chunk files during streaming upload. - -When doing streaming uploads (e.g. using rcat or mount) setting this -flag will cause the swift backend to not upload chunked files. - -This will limit the maximum upload size to 5 GiB. However non chunked -files are easier to deal with and have an MD5SUM. - +.PP +When doing streaming uploads (e.g. +using rcat or mount) setting this flag will cause the swift backend to +not upload chunked files. +.PP +This will limit the maximum upload size to 5 GiB. +However non chunked files are easier to deal with and have an MD5SUM. +.PP Rclone will still chunk files bigger than chunk_size when doing normal copy operations. 
- +.PP Properties: - -- Config: no_chunk -- Env Var: RCLONE_SWIFT_NO_CHUNK -- Type: bool -- Default: false - -#### --swift-no-large-objects - +.IP \[bu] 2 +Config: no_chunk +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_NO_CHUNK +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-no-large-objects +.PP Disable support for static and dynamic large objects - -Swift cannot transparently store files bigger than 5 GiB. There are -two schemes for doing that, static or dynamic large objects, and the -API does not allow rclone to determine whether a file is a static or -dynamic large object without doing a HEAD on the object. Since these -need to be treated differently, this means rclone has to issue HEAD -requests for objects for example when reading checksums. - -When \[ga]no_large_objects\[ga] is set, rclone will assume that there are no -static or dynamic large objects stored. This means it can stop doing -the extra HEAD calls which in turn increases performance greatly -especially when doing a swift to swift transfer with \[ga]--checksum\[ga] set. - -Setting this option implies \[ga]no_chunk\[ga] and also that no files will be -uploaded in chunks, so files bigger than 5 GiB will just fail on +.PP +Swift cannot transparently store files bigger than 5 GiB. +There are two schemes for doing that, static or dynamic large objects, +and the API does not allow rclone to determine whether a file is a +static or dynamic large object without doing a HEAD on the object. +Since these need to be treated differently, this means rclone has to +issue HEAD requests for objects for example when reading checksums. +.PP +When \f[C]no_large_objects\f[R] is set, rclone will assume that there +are no static or dynamic large objects stored. +This means it can stop doing the extra HEAD calls which in turn +increases performance greatly especially when doing a swift to swift +transfer with \f[C]--checksum\f[R] set. +.PP +Setting this option implies \f[C]no_chunk\f[R] and also that no files +will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload. - -If you set this option and there *are* static or dynamic large objects, -then this will give incorrect hashes for them. Downloads will succeed, -but other operations such as Remove and Copy will fail. - - +.PP +If you set this option and there \f[I]are\f[R] static or dynamic large +objects, then this will give incorrect hashes for them. +Downloads will succeed, but other operations such as Remove and Copy +will fail. +.PP Properties: - -- Config: no_large_objects -- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS -- Type: bool -- Default: false - -#### --swift-encoding - +.IP \[bu] 2 +Config: no_large_objects +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. 
+.PP Properties: - -- Config: encoding -- Env Var: RCLONE_SWIFT_ENCODING -- Type: Encoding -- Default: Slash,InvalidUtf8 - - - -## Limitations - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,InvalidUtf8 +.SS --swift-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP The Swift API doesn\[aq]t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the MD5SUM for these. - -## Troubleshooting - -### Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request - +.SS Troubleshooting +.SS Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request +.PP Due to an oddity of the underlying swift library, it gives a \[dq]Bad Request\[dq] error rather than a more sensible error when the authentication fails for Swift. - -So this most likely means your username / password is wrong. You can -investigate further with the \[ga]--dump-bodies\[ga] flag. - +.PP +So this most likely means your username / password is wrong. +You can investigate further with the \f[C]--dump-bodies\f[R] flag. +.PP This may also be caused by specifying the region when you shouldn\[aq]t -have (e.g. OVH). - -### Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token - +have (e.g. +OVH). +.SS Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token +.PP This is most likely caused by forgetting to specify your tenant when setting up a swift remote. - -## OVH Cloud Archive - -To use rclone with OVH cloud archive, first use \[ga]rclone config\[ga] to set up a \[ga]swift\[ga] backend with OVH, choosing \[ga]pca\[ga] as the \[ga]storage_policy\[ga]. - -### Uploading Objects - -Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a \[dq]Frozen\[dq] state within the OVH control panel. - -### Retrieving Objects - -To retrieve objects use \[ga]rclone copy\[ga] as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: - -\[ga]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\[ga] - -Rclone will wait for the time specified then retry the copy. - -# pCloud - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -The initial setup for pCloud involves getting a token from pCloud which you -need to do in your browser. \[ga]rclone config\[ga] walks you through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +.SS OVH Cloud Archive .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Pcloud -\ \[dq]pcloud\[dq] [snip] Storage> pcloud Pcloud App Client Id - leave -blank normally. 
-client_id> Pcloud App Client Secret - leave blank normally. -client_secret> Remote config Use web browser to automatically -authenticate rclone with remote? -* Say Y if the machine running rclone has a web browser you can use * -Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. -If Y failed, try N. -y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... -Got code -------------------- [remote] client_id = client_secret = token -= -{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +To use rclone with OVH cloud archive, first use \f[C]rclone config\f[R] +to set up a \f[C]swift\f[R] backend with OVH, choosing \f[C]pca\f[R] as +the \f[C]storage_policy\f[R]. +.SS Uploading Objects +.PP +Uploading objects to OVH cloud archive is no different to object +storage, you just simply run the command you like (move, copy or sync) +to upload the objects. +Once uploaded the objects will show in a \[dq]Frozen\[dq] state within +the OVH control panel. +.SS Retrieving Objects +.PP +To retrieve objects use \f[C]rclone copy\f[R] as normal. +If the objects are in a frozen state then rclone will ask for them all +to be unfrozen and it will wait at the end of the output with a message +like the following: +.PP +\f[C]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\f[R] +.PP +Rclone will wait for the time specified then retry the copy. +.SH pCloud +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +The initial setup for pCloud involves getting a token from pCloud which +you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: .IP .nf \f[C] -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from pCloud. This only runs from the moment it opens -your browser to the moment you get back the verification code. This -is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock -it temporarily if you are running a host firewall. - -Once configured you can then use \[ga]rclone\[ga] like this, - -List directories in top level of your pCloud - - rclone lsd remote: - -List all the files in your pCloud - - rclone ls remote: - -To copy a local directory to a pCloud directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -pCloud allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. In order to set a Modification time pCloud requires the object -be re-uploaded. - -pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 -hashes in the EU region, so you can use the \[ga]--checksum\[ga] flag. 
- -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - -### Deleting files - -Deleted files will be moved to the trash. Your subscription level -will determine how long items stay in the trash. \[ga]rclone cleanup\[ga] can -be used to empty the trash. - -### Emptying the trash - -Due to an API limitation, the \[ga]rclone cleanup\[ga] command will only work if you -set your username and password in the advanced options for this backend. -Since we generally want to avoid storing user passwords in the rclone config -file, we advise you to only set this up if you need the \[ga]rclone cleanup\[ga] command to work. - -### Root folder ID - -You can set the \[ga]root_folder_id\[ga] for rclone. This is the directory -(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root -of your pCloud drive. - -Normally you will leave this blank and rclone will determine the -correct root to use itself. - -However you can set this to restrict rclone to a specific folder -hierarchy. - -In order to do this you will have to find the \[ga]Folder ID\[ga] of the -directory you wish rclone to display. This will be the \[ga]folder\[ga] field -of the URL when you open the relevant folder in the pCloud web -interface. - -So if the folder you want rclone to use has a URL which looks like -\[ga]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\[ga] -in the browser, then you use \[ga]5xxxxxxxx8\[ga] as -the \[ga]root_folder_id\[ga] in the config. - - -### Standard options - -Here are the Standard options specific to pcloud (Pcloud). - -#### --pcloud-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_PCLOUD_CLIENT_ID -- Type: string -- Required: false - -#### --pcloud-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_PCLOUD_CLIENT_SECRET -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to pcloud (Pcloud). - -#### --pcloud-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_PCLOUD_TOKEN -- Type: string -- Required: false - -#### --pcloud-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_PCLOUD_AUTH_URL -- Type: string -- Required: false - -#### --pcloud-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_PCLOUD_TOKEN_URL -- Type: string -- Required: false - -#### --pcloud-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_PCLOUD_ENCODING -- Type: Encoding -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - -#### --pcloud-root-folder-id - -Fill in for rclone to use a non root folder as its starting point. - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID -- Type: string -- Default: \[dq]d0\[dq] - -#### --pcloud-hostname - -Hostname to connect to. 
- -This is normally set when rclone initially does the oauth connection, -however you will need to set it by hand if you are using remote config -with rclone authorize. - - -Properties: - -- Config: hostname -- Env Var: RCLONE_PCLOUD_HOSTNAME -- Type: string -- Default: \[dq]api.pcloud.com\[dq] -- Examples: - - \[dq]api.pcloud.com\[dq] - - Original/US region - - \[dq]eapi.pcloud.com\[dq] - - EU region - -#### --pcloud-username - -Your pcloud username. - -This is only required when you want to use the cleanup command. Due to a bug -in the pcloud API the required API does not support OAuth authentication so -we have to rely on user password authentication for it. - -Properties: - -- Config: username -- Env Var: RCLONE_PCLOUD_USERNAME -- Type: string -- Required: false - -#### --pcloud-password - -Your pcloud password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: password -- Env Var: RCLONE_PCLOUD_PASSWORD -- Type: string -- Required: false - - - -# PikPak - -PikPak is [a private cloud drive](https://mypikpak.com/). - -Paths are specified as \[ga]remote:path\[ga], and may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -Here is an example of making a remote for PikPak. - -First run: - - rclone config - -This will guide you through an interactive setup process: + rclone config \f[R] .fi .PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / Pcloud + \[rs] \[dq]pcloud\[dq] +[snip] +Storage> pcloud +Pcloud App Client Id - leave blank normally. +client_id> +Pcloud App Client Secret - leave blank normally. +client_secret> +Remote config +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. +y) Yes +n) No +y/n> y +If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth +Log in and authorize rclone for access +Waiting for code... +Got code +-------------------- +[remote] +client_id = +client_secret = +token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi .PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from pCloud. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you +to unblock it temporarily if you are running a host firewall. 
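+.PP
+A minimal sketch of the no-browser flow mentioned above, using the
+standard \f[C]rclone authorize\f[R] command on a second machine that
+does have a browser:
+.IP
+.nf
+\f[C]
+# on the machine with a web browser
+rclone authorize \[dq]pcloud\[dq]
+# copy the token it prints, then run \[aq]rclone config\[aq] on the
+# headless machine and paste the token when prompted
+\f[R]
+.fi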
+.PP +Once configured you can then use \f[C]rclone\f[R] like this, +.PP +List directories in top level of your pCloud +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your pCloud +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to a pCloud directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS Modification times and hashes +.PP +pCloud allows modification times to be set on objects accurate to 1 +second. +These will be used to detect whether objects need syncing or not. +In order to set a Modification time pCloud requires the object be +re-uploaded. +.PP +pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and +SHA256 hashes in the EU region, so you can use the \f[C]--checksum\f[R] +flag. +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Deleting files +.PP +Deleted files will be moved to the trash. +Your subscription level will determine how long items stay in the trash. +\f[C]rclone cleanup\f[R] can be used to empty the trash. +.SS Emptying the trash +.PP +Due to an API limitation, the \f[C]rclone cleanup\f[R] command will only +work if you set your username and password in the advanced options for +this backend. +Since we generally want to avoid storing user passwords in the rclone +config file, we advise you to only set this up if you need the +\f[C]rclone cleanup\f[R] command to work. +.SS Root folder ID +.PP +You can set the \f[C]root_folder_id\f[R] for rclone. +This is the directory (identified by its \f[C]Folder ID\f[R]) that +rclone considers to be the root of your pCloud drive. +.PP +Normally you will leave this blank and rclone will determine the correct +root to use itself. +.PP +However you can set this to restrict rclone to a specific folder +hierarchy. +.PP +In order to do this you will have to find the \f[C]Folder ID\f[R] of the +directory you wish rclone to display. +This will be the \f[C]folder\f[R] field of the URL when you open the +relevant folder in the pCloud web interface. +.PP +So if the folder you want rclone to use has a URL which looks like +\f[C]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\f[R] +in the browser, then you use \f[C]5xxxxxxxx8\f[R] as the +\f[C]root_folder_id\f[R] in the config. +.SS Standard options +.PP +Here are the Standard options specific to pcloud (Pcloud). +.SS --pcloud-client-id +.PP +OAuth Client Id. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-client-secret +.PP +OAuth Client Secret. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to pcloud (Pcloud). +.SS --pcloud-token +.PP +OAuth Access Token as a JSON blob. 
+.PP +Properties: +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-auth-url +.PP +Auth server URL. +.PP +Leave blank to use the provider defaults. +.PP +Properties: +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-token-url +.PP +Token server url. +.PP +Leave blank to use the provider defaults. +.PP +Properties: +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --pcloud-root-folder-id +.PP +Fill in for rclone to use a non root folder as its starting point. +.PP +Properties: +.IP \[bu] 2 +Config: root_folder_id +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]d0\[dq] +.SS --pcloud-hostname +.PP +Hostname to connect to. +.PP +This is normally set when rclone initially does the oauth connection, +however you will need to set it by hand if you are using remote config +with rclone authorize. +.PP +Properties: +.IP \[bu] 2 +Config: hostname +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_HOSTNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]api.pcloud.com\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]api.pcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Original/US region +.RE +.IP \[bu] 2 +\[dq]eapi.pcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +EU region +.RE +.RE +.SS --pcloud-username +.PP +Your pcloud username. +.PP +This is only required when you want to use the cleanup command. +Due to a bug in the pcloud API the required API does not support OAuth +authentication so we have to rely on user password authentication for +it. +.PP +Properties: +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-password +.PP +Your pcloud password. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SH PikPak +.PP +PikPak is a private cloud drive (https://mypikpak.com/). +.PP +Paths are specified as \f[C]remote:path\f[R], and may be as deep as +required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +Here is an example of making a remote for PikPak. +.PP +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + Enter name for new remote. name> remote -.PP + Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. 
-XX / PikPak \ (pikpak) Storage> XX -.PP +XX / PikPak + \[rs] (pikpak) +Storage> XX + Option user. Pikpak username. Enter a value. user> USERNAME -.PP + Option pass. Pikpak password. Choose an alternative below. -y) Yes, type in my own password g) Generate random password y/g> y Enter -the password: password: Confirm the password: password: -.PP +y) Yes, type in my own password +g) Generate random password +y/g> y +Enter the password: +password: +Confirm the password: +password: + Edit advanced config? -y) Yes n) No (default) y/n> -.PP +y) Yes +n) No (default) +y/n> + Configuration complete. -Options: - type: pikpak - user: USERNAME - pass: *** ENCRYPTED *** - -token: -{\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} +Options: +- type: pikpak +- user: USERNAME +- pass: *** ENCRYPTED *** +- token: {\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} Keep this \[dq]remote\[dq] remote? -y) Yes this is OK (default) e) Edit this remote d) Delete this remote +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -### Modification times and hashes - -PikPak keeps modification times on objects, and updates them when uploading objects, -but it does not support changing only the modification time - +\f[R] +.fi +.SS Modification times and hashes +.PP +PikPak keeps modification times on objects, and updates them when +uploading objects, but it does not support changing only the +modification time +.PP The MD5 hash algorithm is supported. - - -### Standard options - +.SS Standard options +.PP Here are the Standard options specific to pikpak (PikPak). - -#### --pikpak-user - +.SS --pikpak-user +.PP Pikpak username. - +.PP Properties: - -- Config: user -- Env Var: RCLONE_PIKPAK_USER -- Type: string -- Required: true - -#### --pikpak-pass - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --pikpak-pass +.PP Pikpak password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: pass -- Env Var: RCLONE_PIKPAK_PASS -- Type: string -- Required: true - -### Advanced options - +.IP \[bu] 2 +Config: pass +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP Here are the Advanced options specific to pikpak (PikPak). - -#### --pikpak-client-id - +.SS --pikpak-client-id +.PP OAuth Client Id. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_id -- Env Var: RCLONE_PIKPAK_CLIENT_ID -- Type: string -- Required: false - -#### --pikpak-client-secret - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-client-secret +.PP OAuth Client Secret. - +.PP Leave blank normally. 
- +.PP Properties: - -- Config: client_secret -- Env Var: RCLONE_PIKPAK_CLIENT_SECRET -- Type: string -- Required: false - -#### --pikpak-token - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-token +.PP OAuth Access Token as a JSON blob. - +.PP Properties: - -- Config: token -- Env Var: RCLONE_PIKPAK_TOKEN -- Type: string -- Required: false - -#### --pikpak-auth-url - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-auth-url +.PP Auth server URL. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: auth_url -- Env Var: RCLONE_PIKPAK_AUTH_URL -- Type: string -- Required: false - -#### --pikpak-token-url - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-token-url +.PP Token server url. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: token_url -- Env Var: RCLONE_PIKPAK_TOKEN_URL -- Type: string -- Required: false - -#### --pikpak-root-folder-id - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-root-folder-id +.PP ID of the root folder. Leave blank normally. - +.PP Fill in for rclone to use a non root folder as its starting point. - - +.PP Properties: - -- Config: root_folder_id -- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID -- Type: string -- Required: false - -#### --pikpak-use-trash - +.IP \[bu] 2 +Config: root_folder_id +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-use-trash +.PP Send files to the trash instead of deleting permanently. - +.PP Defaults to true, namely sending files to the trash. -Use \[ga]--pikpak-use-trash=false\[ga] to delete files permanently instead. - +Use \f[C]--pikpak-use-trash=false\f[R] to delete files permanently +instead. +.PP Properties: - -- Config: use_trash -- Env Var: RCLONE_PIKPAK_USE_TRASH -- Type: bool -- Default: true - -#### --pikpak-trashed-only - +.IP \[bu] 2 +Config: use_trash +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_USE_TRASH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: true +.SS --pikpak-trashed-only +.PP Only show files that are in the trash. - +.PP This will show trashed files in their original directory structure. - +.PP Properties: - -- Config: trashed_only -- Env Var: RCLONE_PIKPAK_TRASHED_ONLY -- Type: bool -- Default: false - -#### --pikpak-hash-memory-limit - -Files bigger than this will be cached on disk to calculate hash if required. - +.IP \[bu] 2 +Config: trashed_only +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TRASHED_ONLY +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --pikpak-hash-memory-limit +.PP +Files bigger than this will be cached on disk to calculate hash if +required. +.PP Properties: - -- Config: hash_memory_limit -- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT -- Type: SizeSuffix -- Default: 10Mi - -#### --pikpak-encoding - +.IP \[bu] 2 +Config: hash_memory_limit +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 10Mi +.SS --pikpak-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
-
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: Encoding
-- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
-
-## Backend commands
-
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_PIKPAK_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
+.SS --pikpak-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_PIKPAK_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Backend commands
+.PP
Here are the commands specific to the pikpak backend.
-
+.PP
Run them with
-
-    rclone backend COMMAND remote:
-
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
The help below will explain what arguments each command takes.
-
-See the [backend](https://rclone.org/commands/rclone_backend/) command for more
-info on how to pass options and arguments.
-
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
These can be run on a running backend using the rc command
-[backend/command](https://rclone.org/rc/#backend-command).
-
-### addurl
-
+backend/command (https://rclone.org/rc/#backend-command).
+.SS addurl
+.PP
Add offline download task for url
-
-    rclone backend addurl remote: [options] [<arguments>+]
-
+.IP
+.nf
+\f[C]
+rclone backend addurl remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
This command adds an offline download task for url.
-
+.PP
Usage:
-
-    rclone backend addurl pikpak:dirpath url
-
-Downloads will be stored in \[aq]dirpath\[aq]. If \[aq]dirpath\[aq] is invalid,
-download will fallback to default \[aq]My Pack\[aq] folder.
-
-
-### decompress
-
+.IP
+.nf
+\f[C]
+rclone backend addurl pikpak:dirpath url
+\f[R]
+.fi
+.PP
+Downloads will be stored in \[aq]dirpath\[aq].
+If \[aq]dirpath\[aq] is invalid, the download will fall back to the
+default \[aq]My Pack\[aq] folder.
+.SS decompress
+.PP
Request decompress of a file/files in a folder
-
-    rclone backend decompress remote: [options] [<arguments>+]
-
+.IP
+.nf
+\f[C]
+rclone backend decompress remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
This command requests decompress of file/files in a folder.
-
+.PP
Usage:
-
-    rclone backend decompress pikpak:dirpath {filename} -o password=password
-    rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
-
-An optional argument \[aq]filename\[aq] can be specified for a file located in
-\[aq]pikpak:dirpath\[aq]. You may want to pass \[aq]-o password=password\[aq] for a
-password-protected files. Also, pass \[aq]-o delete-src-file\[aq] to delete
-source files after decompression finished.
-
+.IP
+.nf
+\f[C]
+rclone backend decompress pikpak:dirpath {filename} -o password=password
+rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+\f[R]
+.fi
+.PP
+An optional argument \[aq]filename\[aq] can be specified for a file
+located in \[aq]pikpak:dirpath\[aq].
+You may want to pass \[aq]-o password=password\[aq] for
+password-protected files.
+Also, pass \[aq]-o delete-src-file\[aq] to delete source files after
+decompression has finished.
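+.PP
+For example - a sketch only, in which the folder name
+\[aq]Downloads\[aq], the URL and the archive name are all hypothetical
+- the two commands above can be combined like this:
+.IP
+.nf
+\f[C]
+# queue an offline download into pikpak:Downloads
+rclone backend addurl pikpak:Downloads https://example.com/archive.zip
+# later, decompress it, deleting the source archive afterwards
+rclone backend decompress pikpak:Downloads archive.zip -o delete-src-file
+\f[R]
+.fi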
+.PP Result: - - { - \[dq]Decompressed\[dq]: 17, - \[dq]SourceDeleted\[dq]: 0, - \[dq]Errors\[dq]: 0 - } - - - - -## Limitations - -### Hashes may be empty - -PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files. - -### Deleted files still visible with trashed-only - -Deleted files will still be visible with \[ga]--pikpak-trashed-only\[ga] even after the -trash emptied. This goes away after few days. - -# premiumize.me - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you -need to do in your browser. \[ga]rclone config\[ga] walks you through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi -.PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value [snip] XX / -premiumize.me \ \[dq]premiumizeme\[dq] [snip] Storage> premiumizeme ** -See help for premiumizeme backend at: https://rclone.org/premiumizeme/ -** -.PP -Remote config Use web browser to automatically authenticate rclone with -remote? -* Say Y if the machine running rclone has a web browser you can use * -Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. -If Y failed, try N. -y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... -Got code -------------------- [remote] type = premiumizeme token = -{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> .IP .nf \f[C] -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +{ + \[dq]Decompressed\[dq]: 17, + \[dq]SourceDeleted\[dq]: 0, + \[dq]Errors\[dq]: 0 +} +\f[R] +.fi +.SS Limitations +.SS Hashes may be empty +.PP +PikPak supports MD5 hash, but sometimes given empty especially for +user-uploaded files. +.SS Deleted files still visible with trashed-only +.PP +Deleted files will still be visible with \f[C]--pikpak-trashed-only\f[R] +even after the trash emptied. +This goes away after few days. +.SH premiumize.me +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +The initial setup for premiumize.me (https://premiumize.me/) involves +getting a token from premiumize.me which you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Enter a string value. 
Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+   \[rs] \[dq]premiumizeme\[dq]
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+\f[R]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
Note that rclone runs a webserver on your local machine to collect the
-token as returned from premiumize.me. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This
-is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock
-it temporarily if you are running a host firewall.
-
-Once configured you can then use \[ga]rclone\[ga] like this,
-
+token as returned from premiumize.me.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[R] and it may require you
+to unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
List directories in top level of your premiumize.me
-
-    rclone lsd remote:
-
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
List all the files in your premiumize.me
-
-    rclone ls remote:
-
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
To copy a local directory to a premiumize.me directory called backup
-
-    rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
premiumize.me does not support modification times or hashes, therefore
-syncing will default to \[ga]--size-only\[ga] checking. Note that using
-\[ga]--update\[ga] will work.
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| \[rs] | 0x5C | \[uFF3C] |
-| \[dq] | 0x22 | \[uFF02] |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can\[aq]t be used in JSON strings.
-
-
-### Standard options
-
+syncing will default to \f[C]--size-only\f[R] checking.
+Note that using \f[C]--update\f[R] will work.
+.SS Restricted filename characters
+.PP
+In addition to the default restricted characters
+set (https://rclone.org/overview/#restricted-characters) the following
+characters are also replaced:
+.PP
+.TS
+tab(@);
+l c c.
+T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP Here are the Standard options specific to premiumizeme (premiumize.me). - -#### --premiumizeme-client-id - +.SS --premiumizeme-client-id +.PP OAuth Client Id. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_id -- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID -- Type: string -- Required: false - -#### --premiumizeme-client-secret - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-client-secret +.PP OAuth Client Secret. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_secret -- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET -- Type: string -- Required: false - -#### --premiumizeme-api-key - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-api-key +.PP API Key. - +.PP This is not normally used - use oauth instead. - - +.PP Properties: - -- Config: api_key -- Env Var: RCLONE_PREMIUMIZEME_API_KEY -- Type: string -- Required: false - -### Advanced options - +.IP \[bu] 2 +Config: api_key +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_API_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP Here are the Advanced options specific to premiumizeme (premiumize.me). - -#### --premiumizeme-token - +.SS --premiumizeme-token +.PP OAuth Access Token as a JSON blob. - +.PP Properties: - -- Config: token -- Env Var: RCLONE_PREMIUMIZEME_TOKEN -- Type: string -- Required: false - -#### --premiumizeme-auth-url - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-auth-url +.PP Auth server URL. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: auth_url -- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL -- Type: string -- Required: false - -#### --premiumizeme-token-url - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-token-url +.PP Token server url. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: token_url -- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL -- Type: string -- Required: false - -#### --premiumizeme-encoding - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PREMIUMIZEME_ENCODING -- Type: Encoding -- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -Note that premiumize.me is case insensitive so you can\[aq]t have a file called -\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -premiumize.me file names can\[aq]t have the \[ga]\[rs]\[ga] or \[ga]\[dq]\[ga] characters in. 
+.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --premiumizeme-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +Note that premiumize.me is case insensitive so you can\[aq]t have a file +called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. +.PP +premiumize.me file names can\[aq]t have the \f[C]\[rs]\f[R] or +\f[C]\[dq]\f[R] characters in. rclone maps these to and from an identical looking unicode equivalents -\[ga]\[uFF3C]\[ga] and \[ga]\[uFF02]\[ga] - +\f[C]\[uFF3C]\f[R] and \f[C]\[uFF02]\f[R] +.PP premiumize.me only supports filenames up to 255 characters in length. - -# Proton Drive - -[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. - -This is an rclone backend for Proton Drive which supports the file transfer -features of Proton Drive using the same client-side encryption. - -Due to the fact that Proton Drive doesn\[aq]t publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client -source code and observing the Proton Drive traffic in the browser. - -**NB** This backend is currently in Beta. It is believed to be correct -and all the integration tests pass. However the Proton Drive protocol -has evolved over time there may be accounts it is not compatible -with. Please [post on the rclone forum](https://forum.rclone.org/) if -you find an incompatibility. - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configurations - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +.SH Proton Drive .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Proton -Drive \ \[dq]Proton Drive\[dq] [snip] Storage> protondrive User name -user> you\[at]protonmail.com Password. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> y Enter the password: password: -Confirm the password: password: Option 2fa. -2FA code (if the account requires one) Enter a value. -Press Enter to leave empty. -2fa> 123456 Remote config -------------------- [remote] type = -protondrive user = you\[at]protonmail.com pass = *** ENCRYPTED *** --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +Proton Drive (https://proton.me/drive) is an end-to-end encrypted Swiss +vault for your files that protects your data. +.PP +This is an rclone backend for Proton Drive which supports the file +transfer features of Proton Drive using the same client-side encryption. +.PP +Due to the fact that Proton Drive doesn\[aq]t publish its API +documentation, this backend is implemented with best efforts by reading +the open-sourced client source code and observing the Proton Drive +traffic in the browser. +.PP +\f[B]NB\f[R] This backend is currently in Beta. +It is believed to be correct and all the integration tests pass. 
+However, the Proton Drive protocol has evolved over time, so there may
+be accounts it is not compatible with.
+Please post on the rclone forum (https://forum.rclone.org/) if you find
+an incompatibility.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configurations
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
.IP
.nf
\f[C]
+ rclone config
\f[R]
.fi
.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+   \[rs] \[dq]Proton Drive\[dq]
+[snip]
+Storage> protondrive
+User name
+user> you\[at]protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you\[at]protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+\f[B]NOTE:\f[R] The Proton Drive encryption keys need to have been
+already generated after a regular login via the browser, otherwise
+attempting to use the credentials in \f[C]rclone\f[R] will fail.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
List directories in top level of your Proton Drive
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
List all the files in your Proton Drive
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
To copy a local directory to a Proton Drive directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
Proton Drive Bridge does not support updating modification times yet.
+.PP
The SHA1 hash algorithm is supported.
+.SS Restricted filename characters
+.PP
+Invalid UTF-8 bytes will be
+replaced (https://rclone.org/overview/#invalid-utf8), also left and
+right spaces will be removed (code
+reference (https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
+.SS Duplicated files
+.PP
Proton Drive cannot have two files with exactly the same name and path.
+If the conflict occurs, depending on the advanced config, the file might
+or might not be overwritten.
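+.PP
+Since the SHA1 hash is supported (see the hashes section above), a
+simple way to verify the copy from the example is with rclone\[aq]s
+standard checking commands - a sketch, reusing the paths shown above:
+.IP
+.nf
+\f[C]
+# compare sizes and SHA1 hashes of source and destination
+rclone check /home/source remote:backup
+# or just list the SHA1 hashes stored for the uploaded files
+rclone sha1sum remote:backup
+\f[R]
+.fi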
- -### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) - +.SS Restricted filename characters +.PP +Invalid UTF-8 bytes will be +replaced (https://rclone.org/overview/#invalid-utf8), also left and +right spaces will be removed (code +reference (https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) +.SS Duplicated files +.PP +Proton Drive can not have two files with exactly the same name and path. +If the conflict occurs, depending on the advanced config, the file might +or might not be overwritten. +.SS Mailbox password (https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) +.PP Please set your mailbox password in the advanced config section. - -### Caching - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +.SS Caching +.PP +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. +The event system, which is the proton API system that provides +visibility of what has changed on the drive, is yet to be implemented, +so updates from other clients won\[cq]t be reflected in the cache. +Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data. - - -### Standard options - +.SS Standard options +.PP Here are the Standard options specific to protondrive (Proton Drive). - -#### --protondrive-username - +.SS --protondrive-username +.PP The username of your proton account - +.PP Properties: - -- Config: username -- Env Var: RCLONE_PROTONDRIVE_USERNAME -- Type: string -- Required: true - -#### --protondrive-password - +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --protondrive-password +.PP The password of your proton account. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: password -- Env Var: RCLONE_PROTONDRIVE_PASSWORD -- Type: string -- Required: true - -#### --protondrive-2fa - +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --protondrive-2fa +.PP The 2FA code - +.PP The value can also be provided with --protondrive-2fa=000000 - -The 2FA code of your proton drive account if the account is set up with +.PP +The 2FA code of your proton drive account if the account is set up with two-factor authentication - +.PP Properties: - -- Config: 2fa -- Env Var: RCLONE_PROTONDRIVE_2FA -- Type: string -- Required: false - -### Advanced options - +.IP \[bu] 2 +Config: 2fa +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_2FA +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP Here are the Advanced options specific to protondrive (Proton Drive). 
- -#### --protondrive-mailbox-password - +.SS --protondrive-mailbox-password +.PP The mailbox password of your two-password proton account. - -For more information regarding the mailbox password, please check the -following official knowledge base article: +.PP +For more information regarding the mailbox password, please check the +following official knowledge base article: https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: mailbox_password -- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD -- Type: string -- Required: false - -#### --protondrive-client-uid - +.IP \[bu] 2 +Config: mailbox_password +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-uid +.PP Client uid key (internal use only) - +.PP Properties: - -- Config: client_uid -- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID -- Type: string -- Required: false - -#### --protondrive-client-access-token - +.IP \[bu] 2 +Config: client_uid +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-access-token +.PP Client access token key (internal use only) - +.PP Properties: - -- Config: client_access_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-refresh-token - +.IP \[bu] 2 +Config: client_access_token +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-refresh-token +.PP Client refresh token key (internal use only) - +.PP Properties: - -- Config: client_refresh_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-salted-key-pass - +.IP \[bu] 2 +Config: client_refresh_token +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-salted-key-pass +.PP Client salted key pass key (internal use only) - +.PP Properties: - -- Config: client_salted_key_pass -- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS -- Type: string -- Required: false - -#### --protondrive-encoding - +.IP \[bu] 2 +Config: client_salted_key_pass +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PROTONDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - -#### --protondrive-original-file-size - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot +.SS --protondrive-original-file-size +.PP Return the file size before encryption - -The size of the encrypted file will be different from (bigger than) the -original file size. 
Unless there is a reason to return the file size
-after encryption is performed, otherwise, set this option to true, as
-features like Open() which will need to be supplied with original content
-size, will fail to operate properly
-
+.PP
+The size of the encrypted file will be different from (bigger than) the
+original file size.
+Unless there is a specific reason to return the size after encryption,
+leave this option set to true: features like Open(), which must be
+supplied with the original content size, will otherwise fail to operate
+properly.
+.PP
Properties:
-
-- Config: original_file_size
-- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
-- Type: bool
-- Default: true
-
-#### --protondrive-app-version
-
-The app version string
-
-The app version string indicates the client that is currently performing
-the API request. This information is required and will be sent with every
-API request.
-
+.IP \[bu] 2
+Config: original_file_size
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-app-version
+.PP
+The app version string
+.PP
+The app version string indicates the client that is currently performing
+the API request.
+This information is required and will be sent with every API request.
+.PP
Properties:
+.IP \[bu] 2
+Config: app_version
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
+.SS --protondrive-replace-existing-draft
+.PP
Create a new revision when filename conflict is detected
-
-When a file upload is cancelled or failed before completion, a draft will be
-created and the subsequent upload of the same file to the same location will be
-reported as a conflict.
-
+.PP
+When a file upload is cancelled or failed before completion, a draft
+will be created and the subsequent upload of the same file to the same
+location will be reported as a conflict.
+.PP
The value can also be set by --protondrive-replace-existing-draft=true
-
-If the option is set to true, the draft will be replaced and then the upload
-operation will restart. If there are other clients also uploading at the same
-file location at the same time, the behavior is currently unknown. Need to set
-to true for integration tests.
-If the option is set to false, an error \[dq]a draft exist - usually this means a
-file is being uploaded at another client, or, there was a failed upload attempt\[dq]
-will be returned, and no upload will happen.
-
+.PP
+If the option is set to true, the draft will be replaced and then the
+upload operation will restart.
+If there are other clients also uploading at the same file location at
+the same time, the behavior is currently unknown.
+It needs to be set to true for the integration tests.
+If the option is set to false, an error \[dq]a draft exist - usually
+this means a file is being uploaded at another client, or, there was a
+failed upload attempt\[dq] will be returned, and no upload will happen.
+.PP Properties: - -- Config: replace_existing_draft -- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT -- Type: bool -- Default: false - -#### --protondrive-enable-caching - +.IP \[bu] 2 +Config: replace_existing_draft +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --protondrive-enable-caching +.PP Caches the files and folders metadata to reduce API calls - -Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, -as the current implementation doesn\[aq]t update or clear the cache when there are -external changes. - -The files and folders on ProtonDrive are represented as links with keyrings, -which can be cached to improve performance and be friendly to the API server. - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +.PP +Notice: If you are mounting ProtonDrive as a VFS, please disable this +feature, as the current implementation doesn\[aq]t update or clear the +cache when there are external changes. +.PP +The files and folders on ProtonDrive are represented as links with +keyrings, which can be cached to improve performance and be friendly to +the API server. +.PP +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. +The event system, which is the proton API system that provides +visibility of what has changed on the drive, is yet to be implemented, +so updates from other clients won\[cq]t be reflected in the cache. +Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data. - +.PP Properties: - -- Config: enable_caching -- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING -- Type: bool -- Default: true - - - -## Limitations - -This backend uses the -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a -fork of the [official repo](https://github.com/ProtonMail/go-proton-api). - -There is no official API documentation available from Proton Drive. But, thanks -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) -and the web, iOS, and Android client codebases, we don\[aq]t need to completely -reverse engineer the APIs by observing the web client traffic! - -[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic -building blocks of API calls and error handling, such as 429 exponential -back-off, but it is pretty much just a barebone interface to the Proton API. -For example, the encryption and decryption of the Proton Drive file are not -provided in this library. - -The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on -top of this quickly. This codebase handles the intricate tasks before and after -calling Proton APIs, particularly the complex encryption scheme, allowing -developers to implement features for other software on top of this codebase. -There are likely quite a few errors in this library, as there isn\[aq]t official -documentation available. 
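+.PP
+As a small sketch of the caching caveat described above (the mount
+point \f[C]/mnt/protondrive\f[R] is hypothetical), a mount shared with
+other clients could disable the metadata cache like this:
+.IP
+.nf
+\f[C]
+rclone mount remote: /mnt/protondrive --protondrive-enable-caching=false
+\f[R]
+.fi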
- -# put.io - -Paths are specified as \[ga]remote:path\[ga] - +.IP \[bu] 2 +Config: enable_caching +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: true +.SS --protondrive-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +This backend uses the +Proton-API-Bridge (https://github.com/henrybear327/Proton-API-Bridge), +which is based on +go-proton-api (https://github.com/henrybear327/go-proton-api), a fork of +the official repo (https://github.com/ProtonMail/go-proton-api). +.PP +There is no official API documentation available from Proton Drive. +But, thanks to Proton open sourcing +proton-go-api (https://github.com/ProtonMail/go-proton-api) and the web, +iOS, and Android client codebases, we don\[aq]t need to completely +reverse engineer the APIs by observing the web client traffic! +.PP +proton-go-api (https://github.com/ProtonMail/go-proton-api) provides the +basic building blocks of API calls and error handling, such as 429 +exponential back-off, but it is pretty much just a barebone interface to +the Proton API. +For example, the encryption and decryption of the Proton Drive file are +not provided in this library. +.PP +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be +built on top of this quickly. +This codebase handles the intricate tasks before and after calling +Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this +codebase. +There are likely quite a few errors in this library, as there isn\[aq]t +official documentation available. +.SH put.io +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP put.io paths may be as deep as required, e.g. -\[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -The initial setup for put.io involves getting a token from put.io -which you need to do in your browser. \[ga]rclone config\[ga] walks you -through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> putio Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value [snip] XX / Put.io -\ \[dq]putio\[dq] [snip] Storage> putio ** See help for putio backend -at: https://rclone.org/putio/ ** +The initial setup for put.io involves getting a token from put.io which +you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. .PP -Remote config Use web browser to automatically authenticate rclone with -remote? -* Say Y if the machine running rclone has a web browser you can use * -Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. -If Y failed, try N. -y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... 
-Got code -------------------- [putio] type = putio token = -{\[dq]access_token\[dq]:\[dq]XXXXXXXX\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y Current remotes: -.PP -Name Type ==== ==== putio putio -.IP "e)" 3 -Edit existing remote -.IP "f)" 3 -New remote -.IP "g)" 3 -Delete remote -.IP "h)" 3 -Rename remote -.IP "i)" 3 -Copy remote -.IP "j)" 3 -Set configuration password -.IP "k)" 3 -Quit config e/n/d/r/c/s/q> q +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: .IP .nf \f[C] -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> putio +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +[snip] +XX / Put.io + \[rs] \[dq]putio\[dq] +[snip] +Storage> putio +** See help for putio backend at: https://rclone.org/putio/ ** +Remote config +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. +y) Yes +n) No +y/n> y +If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth +Log in and authorize rclone for access +Waiting for code... +Got code +-------------------- +[putio] +type = putio +token = {\[dq]access_token\[dq]:\[dq]XXXXXXXX\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +putio putio + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> q +\f[R] +.fi +.PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP Note that rclone runs a webserver on your local machine to collect the -token as returned from put.io if using web browser to automatically -authenticate. This only -runs from the moment it opens your browser to the moment you get back -the verification code. This is on \[ga]http://127.0.0.1:53682/\[ga] and this -it may require you to unblock it temporarily if you are running a host -firewall, or use manual mode. - -You can then use it like this, - -List directories in top level of your put.io - - rclone lsd remote: - -List all the files in your put.io - - rclone ls remote: - -To copy a local directory to a put.io directory called backup - - rclone copy /home/source remote:backup - -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to putio (Put.io). 
- -#### --putio-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_PUTIO_CLIENT_ID -- Type: string -- Required: false - -#### --putio-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_PUTIO_CLIENT_SECRET -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to putio (Put.io). - -#### --putio-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_PUTIO_TOKEN -- Type: string -- Required: false - -#### --putio-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_PUTIO_AUTH_URL -- Type: string -- Required: false - -#### --putio-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_PUTIO_TOKEN_URL -- Type: string -- Required: false - -#### --putio-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_PUTIO_ENCODING -- Type: Encoding -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -put.io has rate limiting. When you hit a limit, rclone automatically -retries after waiting the amount of time requested by the server. - -If you want to avoid ever hitting these limits, you may use the -\[ga]--tpslimit\[ga] flag with a low number. Note that the imposed limits -may be different for different operations, and may change over time. - -# Proton Drive - -[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. - -This is an rclone backend for Proton Drive which supports the file transfer -features of Proton Drive using the same client-side encryption. - -Due to the fact that Proton Drive doesn\[aq]t publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client -source code and observing the Proton Drive traffic in the browser. - -**NB** This backend is currently in Beta. It is believed to be correct -and all the integration tests pass. However the Proton Drive protocol -has evolved over time there may be accounts it is not compatible -with. Please [post on the rclone forum](https://forum.rclone.org/) if -you find an incompatibility. - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configurations - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +token as returned from put.io if using web browser to automatically +authenticate. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Proton -Drive \ \[dq]Proton Drive\[dq] [snip] Storage> protondrive User name -user> you\[at]protonmail.com Password. 
-y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> y Enter the password: password: -Confirm the password: password: Option 2fa. -2FA code (if the account requires one) Enter a value. -Press Enter to leave empty. -2fa> 123456 Remote config -------------------- [remote] type = -protondrive user = you\[at]protonmail.com pass = *** ENCRYPTED *** --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +You can then use it like this, +.PP +List directories in top level of your put.io .IP .nf \f[C] -**NOTE:** The Proton Drive encryption keys need to have been already generated -after a regular login via the browser, otherwise attempting to use the -credentials in \[ga]rclone\[ga] will fail. - -Once configured you can then use \[ga]rclone\[ga] like this, - -List directories in top level of your Proton Drive - - rclone lsd remote: - -List all the files in your Proton Drive - - rclone ls remote: - -To copy a local directory to an Proton Drive directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -Proton Drive Bridge does not support updating modification times yet. - -The SHA1 hash algorithm is supported. - -### Restricted filename characters - -Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and -right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) - -### Duplicated files - -Proton Drive can not have two files with exactly the same name and path. If the -conflict occurs, depending on the advanced config, the file might or might not -be overwritten. - -### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) - -Please set your mailbox password in the advanced config section. - -### Caching - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, -then we might have a problem with caching the stale data. - - -### Standard options - -Here are the Standard options specific to protondrive (Proton Drive). - -#### --protondrive-username - -The username of your proton account - +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your put.io +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to a put.io directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to putio (Put.io). +.SS --putio-client-id +.PP +OAuth Client Id. +.PP +Leave blank normally. 
+.PP Properties: - -- Config: username -- Env Var: RCLONE_PROTONDRIVE_USERNAME -- Type: string -- Required: true - -#### --protondrive-password - -The password of your proton account. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-client-secret +.PP +OAuth Client Secret. +.PP +Leave blank normally. +.PP Properties: - -- Config: password -- Env Var: RCLONE_PROTONDRIVE_PASSWORD -- Type: string -- Required: true - -#### --protondrive-2fa - -The 2FA code - -The value can also be provided with --protondrive-2fa=000000 - -The 2FA code of your proton drive account if the account is set up with -two-factor authentication - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to putio (Put.io). +.SS --putio-token +.PP +OAuth Access Token as a JSON blob. +.PP Properties: - -- Config: 2fa -- Env Var: RCLONE_PROTONDRIVE_2FA -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to protondrive (Proton Drive). - -#### --protondrive-mailbox-password - -The mailbox password of your two-password proton account. - -For more information regarding the mailbox password, please check the -following official knowledge base article: -https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-auth-url +.PP +Auth server URL. +.PP +Leave blank to use the provider defaults. +.PP Properties: - -- Config: mailbox_password -- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD -- Type: string -- Required: false - -#### --protondrive-client-uid - -Client uid key (internal use only) - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-token-url +.PP +Token server url. +.PP +Leave blank to use the provider defaults. +.PP Properties: - -- Config: client_uid -- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID -- Type: string -- Required: false - -#### --protondrive-client-access-token - -Client access token key (internal use only) - -Properties: - -- Config: client_access_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-refresh-token - -Client refresh token key (internal use only) - -Properties: - -- Config: client_refresh_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-salted-key-pass - -Client salted key pass key (internal use only) - -Properties: - -- Config: client_salted_key_pass -- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS -- Type: string -- Required: false - -#### --protondrive-encoding - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
- +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PROTONDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - -#### --protondrive-original-file-size - -Return the file size before encryption - -The size of the encrypted file will be different from (bigger than) the -original file size. Unless there is a reason to return the file size -after encryption is performed, otherwise, set this option to true, as -features like Open() which will need to be supplied with original content -size, will fail to operate properly - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --putio-description +.PP +Description of the remote +.PP Properties: - -- Config: original_file_size -- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE -- Type: bool -- Default: true - -#### --protondrive-app-version - -The app version string - -The app version string indicates the client that is currently performing -the API request. This information is required and will be sent with every -API request. - -Properties: - -- Config: app_version -- Env Var: RCLONE_PROTONDRIVE_APP_VERSION -- Type: string -- Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq] - -#### --protondrive-replace-existing-draft - -Create a new revision when filename conflict is detected - -When a file upload is cancelled or failed before completion, a draft will be -created and the subsequent upload of the same file to the same location will be -reported as a conflict. - -The value can also be set by --protondrive-replace-existing-draft=true - -If the option is set to true, the draft will be replaced and then the upload -operation will restart. If there are other clients also uploading at the same -file location at the same time, the behavior is currently unknown. Need to set -to true for integration tests. -If the option is set to false, an error \[dq]a draft exist - usually this means a -file is being uploaded at another client, or, there was a failed upload attempt\[dq] -will be returned, and no upload will happen. - -Properties: - -- Config: replace_existing_draft -- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT -- Type: bool -- Default: false - -#### --protondrive-enable-caching - -Caches the files and folders metadata to reduce API calls - -Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, -as the current implementation doesn\[aq]t update or clear the cache when there are -external changes. - -The files and folders on ProtonDrive are represented as links with keyrings, -which can be cached to improve performance and be friendly to the API server. - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +put.io has rate limiting. +When you hit a limit, rclone automatically retries after waiting the +amount of time requested by the server. 
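+.PP
+For example, a conservative sketch (the remote name \f[C]putio:\f[R]
+and the paths are assumptions) that caps rclone at one API transaction
+per second:
+.IP
+.nf
+\f[C]
+rclone copy --tpslimit 1 /home/source putio:backup
+\f[R]
+.fi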
+.PP
+If you want to avoid ever hitting these limits, you may use the
+\f[C]--tpslimit\f[R] flag with a low number, as in the sketch above.
+Note that the imposed limits may be different for different operations,
+and may change over time.
+.SH Proton Drive
+.PP
+Proton Drive (https://proton.me/drive) is an end-to-end encrypted Swiss
+vault for your files that protects your data.
+.PP
+This is an rclone backend for Proton Drive which supports the file
+transfer features of Proton Drive using the same client-side encryption.
+.PP
+Because Proton Drive doesn\[aq]t publish its API documentation, this
+backend is implemented on a best-effort basis by reading the
+open-sourced client source code and observing the Proton Drive traffic
+in the browser.
+.PP
+\f[B]NB\f[R] This backend is currently in Beta.
+It is believed to be correct and all the integration tests pass.
+However, as the Proton Drive protocol has evolved over time, there may
+be accounts it is not compatible with.
+Please post on the rclone forum (https://forum.rclone.org/) if you find
+an incompatibility.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configurations
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+ \[rs] \[dq]Proton Drive\[dq]
+[snip]
+Storage> protondrive
+User name
+user> you\[at]protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you\[at]protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+\f[B]NOTE:\f[R] The Proton Drive encryption keys need to have been
+already generated after a regular login via the browser, otherwise
+attempting to use the credentials in \f[C]rclone\f[R] will fail.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
+List directories in top level of your Proton Drive
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+List all the files in your Proton Drive
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to a Proton Drive directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Proton Drive Bridge does not support updating modification times yet.
+.PP
+The SHA1 hash algorithm is supported.
+.SS Restricted filename characters
+.PP
+Invalid UTF-8 bytes will be
+replaced (https://rclone.org/overview/#invalid-utf8), and leading and
+trailing spaces will be removed (code
+reference (https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)).
+.SS Duplicated files
+.PP
+Proton Drive cannot have two files with exactly the same name and path.
+If the conflict occurs, depending on the advanced config, the file might +or might not be overwritten. +.SS Mailbox password (https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) +.PP +Please set your mailbox password in the advanced config section. +.SS Caching +.PP +The cache is currently built for the case when the rclone is the only +instance performing operations to the mount point. +The event system, which is the proton API system that provides +visibility of what has changed on the drive, is yet to be implemented, +so updates from other clients won\[cq]t be reflected in the cache. +Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data. - +.SS Standard options +.PP +Here are the Standard options specific to protondrive (Proton Drive). +.SS --protondrive-username +.PP +The username of your proton account +.PP Properties: - -- Config: enable_caching -- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING -- Type: bool -- Default: true - - - -## Limitations - -This backend uses the -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a -fork of the [official repo](https://github.com/ProtonMail/go-proton-api). - -There is no official API documentation available from Proton Drive. But, thanks -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) -and the web, iOS, and Android client codebases, we don\[aq]t need to completely -reverse engineer the APIs by observing the web client traffic! - -[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic -building blocks of API calls and error handling, such as 429 exponential -back-off, but it is pretty much just a barebone interface to the Proton API. -For example, the encryption and decryption of the Proton Drive file are not -provided in this library. - -The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on -top of this quickly. This codebase handles the intricate tasks before and after -calling Proton APIs, particularly the complex encryption scheme, allowing -developers to implement features for other software on top of this codebase. -There are likely quite a few errors in this library, as there isn\[aq]t official -documentation available. - -# Seafile - -This is a backend for the [Seafile](https://www.seafile.com/) storage service: -- It works with both the free community edition or the professional edition. +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --protondrive-password +.PP +The password of your proton account. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --protondrive-2fa +.PP +The 2FA code +.PP +The value can also be provided with --protondrive-2fa=000000 +.PP +The 2FA code of your proton drive account if the account is set up with +two-factor authentication +.PP +Properties: +.IP \[bu] 2 +Config: 2fa +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_2FA +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to protondrive (Proton Drive). 
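+.PP
+As the Properties entries below show, every option here can also be
+supplied via its environment variable; for example, a sketch (the
+remote name \f[C]remote:\f[R] is an assumption) that disables the
+metadata cache for a single listing:
+.IP
+.nf
+\f[C]
+RCLONE_PROTONDRIVE_ENABLE_CACHING=false rclone lsd remote:
+\f[R]
+.fi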
+.SS --protondrive-mailbox-password
+.PP
+The mailbox password of your two-password proton account.
+.PP
+For more information regarding the mailbox password, please check the
+following official knowledge base article:
+https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: mailbox_password
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-uid
+.PP
+Client uid key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_uid
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_UID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-access-token
+.PP
+Client access token key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_access_token
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-refresh-token
+.PP
+Client refresh token key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_refresh_token
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-salted-key-pass
+.PP
+Client salted key pass key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_salted_key_pass
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
+.SS --protondrive-original-file-size
+.PP
+Return the file size before encryption
+.PP
+The size of the encrypted file will be different from (bigger than) the
+original file size.
+Unless there is a reason to return the file size after encryption,
+leave this option set to true: features like Open(), which must be
+supplied with the original content size, will otherwise fail to operate
+properly.
+.PP
+Properties:
+.IP \[bu] 2
+Config: original_file_size
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-app-version
+.PP
+The app version string
+.PP
+The app version string indicates the client that is currently performing
+the API request.
+This information is required and will be sent with every API request.
+.PP
+Properties:
+.IP \[bu] 2
+Config: app_version
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
+.SS --protondrive-replace-existing-draft
+.PP
+Create a new revision when filename conflict is detected
+.PP
+When a file upload is cancelled or fails before completion, a draft
+will be created and the subsequent upload of the same file to the same
+location will be reported as a conflict.
+.PP
+The value can also be set by --protondrive-replace-existing-draft=true
+.PP
+If the option is set to true, the draft will be replaced and then the
+upload operation will restart.
+If there are other clients also uploading at the same file location at
+the same time, the behavior is currently unknown.
+This must be set to true for the integration tests.
+If the option is set to false, an error \[dq]a draft exist - usually
+this means a file is being uploaded at another client, or, there was a
+failed upload attempt\[dq] will be returned, and no upload will happen.
+.PP
+Properties:
+.IP \[bu] 2
+Config: replace_existing_draft
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --protondrive-enable-caching
+.PP
+Caches the files and folders metadata to reduce API calls
+.PP
+Notice: If you are mounting ProtonDrive as a VFS, please disable this
+feature, as the current implementation doesn\[aq]t update or clear the
+cache when there are external changes.
+.PP
+The files and folders on ProtonDrive are represented as links with
+keyrings, which can be cached to improve performance and be friendly to
+the API server.
+.PP
+The cache is currently built for the case when rclone is the only
+instance performing operations to the mount point.
+The event system, which is the proton API system that provides
+visibility of what has changed on the drive, is yet to be implemented,
+so updates from other clients won\[cq]t be reflected in the cache.
+Thus, if there are concurrent clients accessing the same mount point,
+then we might have a problem with caching the stale data.
+.PP
+Properties:
+.IP \[bu] 2
+Config: enable_caching
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+This backend uses the
+Proton-API-Bridge (https://github.com/henrybear327/Proton-API-Bridge),
+which is based on
+go-proton-api (https://github.com/henrybear327/go-proton-api), a fork of
+the official repo (https://github.com/ProtonMail/go-proton-api).
+.PP
+There is no official API documentation available from Proton Drive.
+But thanks to Proton open sourcing
+proton-go-api (https://github.com/ProtonMail/go-proton-api) and the web,
+iOS, and Android client codebases, we don\[aq]t need to completely
+reverse engineer the APIs by observing the web client traffic!
+.PP
+proton-go-api (https://github.com/ProtonMail/go-proton-api) provides the
+basic building blocks of API calls and error handling, such as 429
+exponential back-off, but it is pretty much just a barebone interface to
+the Proton API.
+For example, the encryption and decryption of the Proton Drive file are
+not provided in this library.
+.PP
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be
+built on top of this quickly.
+This codebase handles the intricate tasks before and after calling
+Proton APIs, particularly the complex encryption scheme, allowing
+developers to implement features for other software on top of this
+codebase.
+There are likely quite a few errors in this library, as there isn\[aq]t
+official documentation available.
+.SH Seafile
+.PP
+This is a backend for the Seafile (https://www.seafile.com/) storage
+service: - It works with both the free community edition and the
+professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
-- It supports 2FA enabled users -- Using a Library API Token is **not** supported - -## Configuration - -There are two distinct modes you can setup your remote: -- you point your remote to the **root of the server**, meaning you don\[aq]t specify a library during the configuration: -Paths are specified as \[ga]remote:library\[ga]. You may put subdirectories in too, e.g. \[ga]remote:library/path/to/dir\[ga]. +- It supports 2FA enabled users - Using a Library API Token is +\f[B]not\f[R] supported +.SS Configuration +.PP +There are two distinct modes you can setup your remote: - you point your +remote to the \f[B]root of the server\f[R], meaning you don\[aq]t +specify a library during the configuration: Paths are specified as +\f[C]remote:library\f[R]. +You may put subdirectories in too, e.g. +\f[C]remote:library/path/to/dir\f[R]. - you point your remote to a specific library during the configuration: -Paths are specified as \[ga]remote:path/to/dir\[ga]. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_) - -### Configuration in root mode - -Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run - - rclone config - -This will guide you through an interactive setup process. To authenticate -you will need the URL of your server, your email (or username) and your password. -\f[R] -.fi +Paths are specified as \f[C]remote:path/to/dir\f[R]. +\f[B]This is the recommended mode when using encrypted libraries\f[R]. +(\f[I]This mode is possibly slightly faster than the root mode\f[R]) +.SS Configuration in root mode .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> seafile Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value [snip] XX / -Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for -seafile backend at: https://rclone.org/seafile/ ** -.PP -URL of seafile host to connect to Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value 1 / Connect to -cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url> -http://my.seafile.server/ User name (usually email address) Enter a -string value. -Press Enter for the default (\[dq]\[dq]). -user> me\[at]example.com Password y) Yes type in my own password g) -Generate random password n) No leave this optional password blank -(default) y/g> y Enter the password: password: Confirm the password: -password: Two-factor authentication (\[aq]true\[aq] if the account has -2FA enabled) Enter a boolean value (true or false). -Press Enter for the default (\[dq]false\[dq]). -2fa> false Name of the library. -Leave blank to access all non-encrypted libraries. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -library> Library password (for encrypted libraries only). -Leave blank if you pass it through the command line. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank (default) y/g/n> n Edit advanced config? -(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor -authentication is not enabled on this account. 
--------------------- [seafile] type = seafile url = -http://my.seafile.server/ user = me\[at]example.com pass = *** ENCRYPTED -*** 2fa = false -------------------- y) Yes this is OK (default) e) Edit -this remote d) Delete this remote y/e/d> y +Here is an example of making a seafile configuration for a user with +\f[B]no\f[R] two-factor authentication. +First run .IP .nf \f[C] -This remote is called \[ga]seafile\[ga]. It\[aq]s pointing to the root of your seafile server and can now be used like this: +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +To authenticate you will need the URL of your server, your email (or +username) and your password. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> seafile +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +[snip] +XX / Seafile + \[rs] \[dq]seafile\[dq] +[snip] +Storage> seafile +** See help for seafile backend at: https://rclone.org/seafile/ ** +URL of seafile host to connect to +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \[rs] \[dq]https://cloud.seafile.com/\[dq] +url> http://my.seafile.server/ +User name (usually email address) +Enter a string value. Press Enter for the default (\[dq]\[dq]). +user> me\[at]example.com +Password +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g> y +Enter the password: +password: +Confirm the password: +password: +Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +2fa> false +Name of the library. Leave blank to access all non-encrypted libraries. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +library> +Library password (for encrypted libraries only). Leave blank if you pass it through the command line. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> n +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +Two-factor authentication is not enabled on this account. +-------------------- +[seafile] +type = seafile +url = http://my.seafile.server/ +user = me\[at]example.com +pass = *** ENCRYPTED *** +2fa = false +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This remote is called \f[C]seafile\f[R]. +It\[aq]s pointing to the root of your seafile server and can now be used +like this: +.PP See all libraries - - rclone lsd seafile: - -Create a new library - - rclone mkdir seafile:library - -List the contents of a library - - rclone ls seafile:library - -Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any -excess files in the library. - - rclone sync --interactive /home/local/directory seafile:library - -### Configuration in library mode - -Here\[aq]s an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you: -\f[R] -.fi -.PP -No remotes found, make a new one? 
-n) New remote s) Set configuration password q) Quit config n/s/q> n -name> seafile Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value [snip] XX / -Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for -seafile backend at: https://rclone.org/seafile/ ** -.PP -URL of seafile host to connect to Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value 1 / Connect to -cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url> -http://my.seafile.server/ User name (usually email address) Enter a -string value. -Press Enter for the default (\[dq]\[dq]). -user> me\[at]example.com Password y) Yes type in my own password g) -Generate random password n) No leave this optional password blank -(default) y/g> y Enter the password: password: Confirm the password: -password: Two-factor authentication (\[aq]true\[aq] if the account has -2FA enabled) Enter a boolean value (true or false). -Press Enter for the default (\[dq]false\[dq]). -2fa> true Name of the library. -Leave blank to access all non-encrypted libraries. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -library> My Library Library password (for encrypted libraries only). -Leave blank if you pass it through the command line. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank (default) y/g/n> n Edit advanced config? -(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor -authentication: please enter your 2FA code 2fa code> 123456 -Authenticating... -Success! -------------------- [seafile] type = seafile url = -http://my.seafile.server/ user = me\[at]example.com pass = 2fa = true -library = My Library -------------------- y) Yes this is OK (default) e) -Edit this remote d) Delete this remote y/e/d> y .IP .nf \f[C] -You\[aq]ll notice your password is blank in the configuration. It\[aq]s because we only need the password to authenticate you once. - -You specified \[ga]My Library\[ga] during the configuration. The root of the remote is pointing at the -root of the library \[ga]My Library\[ga]: - -See all files in the library: - - rclone lsd seafile: - -Create a new directory inside the library - - rclone mkdir seafile:directory - -List the contents of a directory - - rclone ls seafile:directory - -Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any +rclone lsd seafile: +\f[R] +.fi +.PP +Create a new library +.IP +.nf +\f[C] +rclone mkdir seafile:library +\f[R] +.fi +.PP +List the contents of a library +.IP +.nf +\f[C] +rclone ls seafile:library +\f[R] +.fi +.PP +Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any excess files in the library. +.IP +.nf +\f[C] +rclone sync --interactive /home/local/directory seafile:library +\f[R] +.fi +.SS Configuration in library mode +.PP +Here\[aq]s an example of a configuration in library mode with a user +that has the two-factor authentication enabled. +Your 2FA code will be asked at the end of the configuration, and will +attempt to authenticate you: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> seafile +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). 
+Choose a number from below, or type in your own value +[snip] +XX / Seafile + \[rs] \[dq]seafile\[dq] +[snip] +Storage> seafile +** See help for seafile backend at: https://rclone.org/seafile/ ** - rclone sync --interactive /home/local/directory seafile: - - -### --fast-list - -Seafile version 7+ supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. +URL of seafile host to connect to +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \[rs] \[dq]https://cloud.seafile.com/\[dq] +url> http://my.seafile.server/ +User name (usually email address) +Enter a string value. Press Enter for the default (\[dq]\[dq]). +user> me\[at]example.com +Password +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g> y +Enter the password: +password: +Confirm the password: +password: +Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +2fa> true +Name of the library. Leave blank to access all non-encrypted libraries. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +library> My Library +Library password (for encrypted libraries only). Leave blank if you pass it through the command line. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> n +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +Two-factor authentication: please enter your 2FA code +2fa code> 123456 +Authenticating... +Success! +-------------------- +[seafile] +type = seafile +url = http://my.seafile.server/ +user = me\[at]example.com +pass = +2fa = true +library = My Library +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +You\[aq]ll notice your password is blank in the configuration. +It\[aq]s because we only need the password to authenticate you once. +.PP +You specified \f[C]My Library\f[R] during the configuration. +The root of the remote is pointing at the root of the library +\f[C]My Library\f[R]: +.PP +See all files in the library: +.IP +.nf +\f[C] +rclone lsd seafile: +\f[R] +.fi +.PP +Create a new directory inside the library +.IP +.nf +\f[C] +rclone mkdir seafile:directory +\f[R] +.fi +.PP +List the contents of a directory +.IP +.nf +\f[C] +rclone ls seafile:directory +\f[R] +.fi +.PP +Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any +excess files in the library. +.IP +.nf +\f[C] +rclone sync --interactive /home/local/directory seafile: +\f[R] +.fi +.SS --fast-list +.PP +Seafile version 7+ supports \f[C]--fast-list\f[R] which allows you to +use fewer transactions in exchange for more memory. +See the rclone docs (https://rclone.org/docs/#fast-list) for more +details. 
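+.PP
+For example, a sketch (the library name \f[C]library\f[R] on the remote
+configured above is an assumption) of a recursive listing that uses
+fewer transactions:
+.IP
+.nf
+\f[C]
+rclone ls --fast-list seafile:library
+\f[R]
+.fi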
Please note this is not supported on seafile server version 6.x - - -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| / | 0x2F | \[uFF0F] | -| \[dq] | 0x22 | \[uFF02] | -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - -### Seafile and rclone link - +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +/ +T}@T{ +0x2F +T}@T{ +\[uFF0F] +T} +T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Seafile and rclone link +.PP Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: -\f[R] -.fi -.PP +.IP +.nf +\f[C] rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ -.IP -.nf -\f[C] +\f[R] +.fi +.PP or if run on a directory you will get: -\f[R] -.fi -.PP -rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ .IP .nf \f[C] -Please note a share link is unique for each file or directory. If you run a link command on a file/dir -that has already been shared, you will get the exact same link. - -### Compatibility - -It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions: -- 6.3.4 community edition -- 7.0.5 community edition -- 7.1.3 community edition -- 9.0.10 community edition - +rclone link seafile:dir +http://my.seafile.server/d/9ea2455f6f55478bbb0d/ +\f[R] +.fi +.PP +Please note a share link is unique for each file or directory. +If you run a link command on a file/dir that has already been shared, +you will get the exact same link. +.SS Compatibility +.PP +It has been actively developed using the seafile docker +image (https://github.com/haiwen/seafile-docker) of these versions: - +6.3.4 community edition - 7.0.5 community edition - 7.1.3 community +edition - 9.0.10 community edition +.PP Versions below 6.0 are not supported. -Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work properly. - -Each new version of \[ga]rclone\[ga] is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server. - - -### Standard options - +Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work +properly. +.PP +Each new version of \f[C]rclone\f[R] is automatically tested against the +latest docker image (https://hub.docker.com/r/seafileltd/seafile-mc/) of +the seafile community server. +.SS Standard options +.PP Here are the Standard options specific to seafile (seafile). - -#### --seafile-url - +.SS --seafile-url +.PP URL of seafile host to connect to. - +.PP Properties: - -- Config: url -- Env Var: RCLONE_SEAFILE_URL -- Type: string -- Required: true -- Examples: - - \[dq]https://cloud.seafile.com/\[dq] - - Connect to cloud.seafile.com. 
- -#### --seafile-user - +.IP \[bu] 2 +Config: url +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]https://cloud.seafile.com/\[dq] +.RS 2 +.IP \[bu] 2 +Connect to cloud.seafile.com. +.RE +.RE +.SS --seafile-user +.PP User name (usually email address). - +.PP Properties: - -- Config: user -- Env Var: RCLONE_SEAFILE_USER -- Type: string -- Required: true - -#### --seafile-pass - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --seafile-pass +.PP Password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: pass -- Env Var: RCLONE_SEAFILE_PASS -- Type: string -- Required: false - -#### --seafile-2fa - -Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled). - +.IP \[bu] 2 +Config: pass +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-2fa +.PP +Two-factor authentication (\[aq]true\[aq] if the account has 2FA +enabled). +.PP Properties: - -- Config: 2fa -- Env Var: RCLONE_SEAFILE_2FA -- Type: bool -- Default: false - -#### --seafile-library - +.IP \[bu] 2 +Config: 2fa +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_2FA +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --seafile-library +.PP Name of the library. - +.PP Leave blank to access all non-encrypted libraries. - +.PP Properties: - -- Config: library -- Env Var: RCLONE_SEAFILE_LIBRARY -- Type: string -- Required: false - -#### --seafile-library-key - +.IP \[bu] 2 +Config: library +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_LIBRARY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-library-key +.PP Library password (for encrypted libraries only). - +.PP Leave blank if you pass it through the command line. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: library_key -- Env Var: RCLONE_SEAFILE_LIBRARY_KEY -- Type: string -- Required: false - -#### --seafile-auth-token - +.IP \[bu] 2 +Config: library_key +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_LIBRARY_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-auth-token +.PP Authentication token. - +.PP Properties: - -- Config: auth_token -- Env Var: RCLONE_SEAFILE_AUTH_TOKEN -- Type: string -- Required: false - -### Advanced options - +.IP \[bu] 2 +Config: auth_token +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_AUTH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP Here are the Advanced options specific to seafile (seafile). - -#### --seafile-create-library - +.SS --seafile-create-library +.PP Should rclone create a library if it doesn\[aq]t exist. - +.PP Properties: - -- Config: create_library -- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY -- Type: bool -- Default: false - -#### --seafile-encoding - +.IP \[bu] 2 +Config: create_library +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_CREATE_LIBRARY +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --seafile-encoding +.PP The encoding for the backend. 
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
Properties:
-
-- Config: encoding
-- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: Encoding
-- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
-
-
-
-# SFTP
-
-SFTP is the [Secure (or SSH) File Transfer
-Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
-
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_SEAFILE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
+.SS --seafile-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SEAFILE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SH SFTP
+.PP
+SFTP is the Secure (or SSH) File Transfer
+Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
+.PP
The SFTP backend can be used with a number of different providers:
-
-
-- Hetzner Storage Box
-- rsync.net
-
-
-SFTP runs over SSH v2 and is installed as standard with most modern
-SSH installations.
-
-Paths are specified as \[ga]remote:path\[ga]. If the path does not begin with
-a \[ga]/\[ga] it is relative to the home directory of the user. An empty path
-\[ga]remote:\[ga] refers to the user\[aq]s home directory. For example, \[ga]rclone lsd remote:\[ga]
-would list the home directory of the user configured in the rclone remote config
-(\[ga]i.e /home/sftpuser\[ga]). However, \[ga]rclone lsd remote:/\[ga] would list the root
-directory for remote machine (i.e. \[ga]/\[ga])
-
-Note that some SFTP servers will need the leading / - Synology is a
-good example of this. rsync.net and Hetzner, on the other hand, requires users to
-OMIT the leading /.
-
-Note that by default rclone will try to execute shell commands on
-the server, see [shell access considerations](#shell-access-considerations).
-
-## Configuration
-
-Here is an example of making an SFTP configuration. First run
-
- rclone config
-
+.IP \[bu] 2
+Hetzner Storage Box
+.IP \[bu] 2
+rsync.net
+.PP
+SFTP runs over SSH v2 and is installed as standard with most modern SSH
+installations.
+.PP
+Paths are specified as \f[C]remote:path\f[R].
+If the path does not begin with a \f[C]/\f[R] it is relative to the home
+directory of the user.
+An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory.
+For example, \f[C]rclone lsd remote:\f[R] would list the home directory
+of the user configured in the rclone remote config
+(i.e. \f[C]/home/sftpuser\f[R]).
+However, \f[C]rclone lsd remote:/\f[R] would list the root directory of
+the remote machine (i.e.
+\f[C]/\f[R]).
+.PP
+Note that some SFTP servers will need the leading / - Synology is a good
+example of this.
+rsync.net and Hetzner, on the other hand, require users to OMIT the
+leading /.
+.PP
+Note that by default rclone will try to execute shell commands on the
+server, see shell access considerations.
+.SS Configuration
+.PP
+Here is an example of making an SFTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
This will guide you through an interactive setup process.
.IP
.nf
\f[C]
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
-Choose a number from below, or type in your own value [snip] XX / -SSH/SFTP \ \[dq]sftp\[dq] [snip] Storage> sftp SSH host to connect to -Choose a number from below, or type in your own value 1 / Connect to -example.com \ \[dq]example.com\[dq] host> example.com SSH username Enter -a string value. -Press Enter for the default (\[dq]$USER\[dq]). -user> sftpuser SSH port number Enter a signed integer. -Press Enter for the default (22). -port> SSH password, leave blank to use ssh-agent. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> n Path to unencrypted PEM-encoded -private key file, leave blank to use ssh-agent. -key_file> Remote config -------------------- [remote] host = example.com -user = sftpuser port = pass = key_file = -------------------- y) Yes -this is OK e) Edit this remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -This remote is called \[ga]remote\[ga] and can now be used like this: - -See all directories in the home directory - - rclone lsd remote: - -See all directories in the root directory - - rclone lsd remote:/ - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync \[ga]/home/local/directory\[ga] to the remote directory, deleting any -excess files in the directory. - - rclone sync --interactive /home/local/directory remote:directory - -Mount the remote path \[ga]/srv/www-data/\[ga] to the local path -\[ga]/mnt/www-data\[ga] - - rclone mount remote:/srv/www-data/ /mnt/www-data - -### SSH Authentication - -The SFTP remote supports three authentication methods: - - * Password - * Key file, including certificate signed keys - * ssh-agent - -Key files should be PEM-encoded private key files. For instance \[ga]/home/$USER/.ssh/id_rsa\[ga]. -Only unencrypted OpenSSH or PEM encrypted files are supported. - -The key file can be specified in either an external file (key_file) or contained within the -rclone config file (key_pem). If using key_pem in the config file, the entry should be on a -single line with new line (\[aq]\[rs]n\[aq] or \[aq]\[rs]r\[rs]n\[aq]) separating lines. i.e. - - key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY----- - -This will generate it correctly for key_pem for use in the config: - - awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa - -If you don\[aq]t specify \[ga]pass\[ga], \[ga]key_file\[ga], or \[ga]key_pem\[ga] or \[ga]ask_password\[ga] then -rclone will attempt to contact an ssh-agent. You can also specify \[ga]key_use_agent\[ga] -to force the usage of an ssh-agent. In this case \[ga]key_file\[ga] or \[ga]key_pem\[ga] can -also be specified to force the usage of a specific key in the ssh-agent. - -Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment. - -If you set the \[ga]ask_password\[ga] option, rclone will prompt for a password when -needed and no password has been configured. - -#### Certificate-signed keys - -With traditional key-based authentication, you configure your private key only, -and the public key built into it will be used during the authentication process. - -If you have a certificate you may use it to sign your public key, creating a -separate SSH user certificate that should be used instead of the plain public key -extracted from the private key. Then you must provide the path to the -user certificate public key file in \[ga]pubkey_file\[ga]. 
-
-Note: This is not the traditional public key paired with your private key,
-typically saved as \[ga]/home/$USER/.ssh/id_rsa.pub\[ga]. Setting this path in
-\[ga]pubkey_file\[ga] will not work.
-
-Example:
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / SSH/SFTP
+   \[rs] \[dq]sftp\[dq]
+[snip]
+Storage> sftp
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+   \[rs] \[dq]example.com\[dq]
+host> example.com
+SSH username
+Enter a string value. Press Enter for the default (\[dq]$USER\[dq]).
+user> sftpuser
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
+port>
+SSH password, leave blank to use ssh-agent.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
+Remote config
+--------------------
+[remote]
+host = example.com
+user = sftpuser
+port =
+pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
\f[R]
.fi
.PP
+This remote is called \f[C]remote\f[R] and can now be used like this:
+.PP
+See all directories in the home directory
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+See all directories in the root directory
+.IP
+.nf
+\f[C]
+rclone lsd remote:/
+\f[R]
+.fi
+.PP
+Make a new directory
+.IP
+.nf
+\f[C]
+rclone mkdir remote:path/to/directory
+\f[R]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone ls remote:path/to/directory
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote directory, deleting
+any excess files in the directory.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory remote:directory
+\f[R]
+.fi
+.PP
+Mount the remote path \f[C]/srv/www-data/\f[R] to the local path
+\f[C]/mnt/www-data\f[R]
+.IP
+.nf
+\f[C]
+rclone mount remote:/srv/www-data/ /mnt/www-data
+\f[R]
+.fi
+.SS SSH Authentication
+.PP
+The SFTP remote supports three authentication methods:
+.IP \[bu] 2
+Password
+.IP \[bu] 2
+Key file, including certificate signed keys
+.IP \[bu] 2
+ssh-agent
+.PP
+Key files should be PEM-encoded private key files.
+For instance \f[C]/home/$USER/.ssh/id_rsa\f[R].
+Only unencrypted OpenSSH or PEM encrypted files are supported.
+.PP
+The key file can be specified in either an external file (key_file) or
+contained within the rclone config file (key_pem).
+If using key_pem in the config file, the entry should be on a single
+line with new line (\[aq]\[rs]n\[aq] or \[aq]\[rs]r\[rs]n\[aq]) separating lines.
+i.e.
+.IP
+.nf
+\f[C]
+key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY-----
+\f[R]
+.fi
+.PP
+This will generate it correctly for key_pem for use in the config:
+.IP
+.nf
+\f[C]
+awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa
+\f[R]
+.fi
+.PP
+If you don\[aq]t specify \f[C]pass\f[R], \f[C]key_file\f[R], or
+\f[C]key_pem\f[R] or \f[C]ask_password\f[R] then rclone will attempt to
+contact an ssh-agent.
+You can also specify \f[C]key_use_agent\f[R] to force the usage of an
+ssh-agent.
+In this case \f[C]key_file\f[R] or \f[C]key_pem\f[R] can also be
+specified to force the usage of a specific key in the ssh-agent.
+.PP
+Using an ssh-agent is the only way to load encrypted OpenSSH keys at the
+moment.
+.PP
+If you set the \f[C]ask_password\f[R] option, rclone will prompt for a
+password when needed and no password has been configured.
+.SS Certificate-signed keys
+.PP
+With traditional key-based authentication, you configure your private
+key only, and the public key built into it will be used during the
+authentication process.
+.PP
+If you have a certificate, you may use it to sign your public key,
+creating a separate SSH user certificate that should be used instead of
+the plain public key extracted from the private key.
+Then you must provide the path to the user certificate public key file
+in \f[C]pubkey_file\f[R].
+.PP
+Note: This is not the traditional public key paired with your private
+key, typically saved as \f[C]/home/$USER/.ssh/id_rsa.pub\f[R].
+Setting this path in \f[C]pubkey_file\f[R] will not work.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = \[ti]/id_rsa
+pubkey_file = \[ti]/id_rsa-cert.pub
+\f[R]
+.fi
+.PP
If you concatenate a cert with a private key then you can specify the
merged file in both places.
-
-Note: the cert must come first in the file. e.g.
-
-\[ga]\[ga]\[ga]
+.PP
+Note: the cert must come first in the file.
+e.g.
+.IP
+.nf
+\f[C]
cat id_rsa-cert.pub id_rsa > merged_key
-\[ga]\[ga]\[ga]
-
-### Host key validation
-
-By default rclone will not check the server\[aq]s host key for validation. This
-can allow an attacker to replace a server with their own and if you use
-password authentication then this can lead to that password being exposed.
-
-Host key matching, using standard \[ga]known_hosts\[ga] files can be turned on by
-enabling the \[ga]known_hosts_file\[ga] option. This can point to the file maintained
-by \[ga]OpenSSH\[ga] or can point to a unique file.
-
-e.g. using the OpenSSH \[ga]known_hosts\[ga] file:
-
-\[ga]\[ga]\[ga]
+\f[R]
+.fi
+.SS Host key validation
+.PP
+By default rclone will not check the server\[aq]s host key for
+validation.
+This can allow an attacker to replace the server with their own, and if
+you use password authentication this can lead to that password being
+exposed.
+.PP
+Host key matching, using standard \f[C]known_hosts\f[R] files, can be
+turned on by enabling the \f[C]known_hosts_file\f[R] option.
+This can point to the file maintained by \f[C]OpenSSH\f[R] or can point
+to a unique file.
+.PP
+e.g.
+using the OpenSSH \f[C]known_hosts\f[R] file:
+.IP
+.nf
+\f[C]
[remote]
type = sftp
host = example.com
@@ -54716,6 +59481,19 @@ Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
Type: bool
.IP \[bu] 2
Default: false
+.SS --sftp-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Limitations
.PP
On some SFTP servers (e.g.
@@ -55041,6 +59819,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot +.SS --smb-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SMB_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SH Storj .PP Storj (https://storj.io) is an encrypted, secure, and cost-effective @@ -55435,6 +60226,23 @@ Provider: new Type: string .IP \[bu] 2 Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to storj (Storj Decentralized +Cloud Storage). +.SS --storj-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_STORJ_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Usage .PP Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for @@ -55958,6 +60766,19 @@ Env Var: RCLONE_SUGARSYNC_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Ctl,InvalidUtf8,Dot +.SS --sugarsync-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SUGARSYNC_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP \f[C]rclone about\f[R] is not supported by the SugarSync backend. @@ -56164,6 +60985,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot +.SS --uptobox-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_UPTOBOX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP Uptobox will delete inactive files that have not been accessed in 60 @@ -56673,6 +61507,19 @@ Env Var: RCLONE_UNION_MIN_FREE_SPACE Type: SizeSuffix .IP \[bu] 2 Default: 1Gi +.SS --union-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_UNION_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Metadata .PP Any metadata supported by the underlying remote is read and written. @@ -57023,6 +61870,32 @@ Env Var: RCLONE_WEBDAV_NEXTCLOUD_CHUNK_SIZE Type: SizeSuffix .IP \[bu] 2 Default: 10Mi +.SS --webdav-owncloud-exclude-shares +.PP +Exclude ownCloud shares +.PP +Properties: +.IP \[bu] 2 +Config: owncloud_exclude_shares +.IP \[bu] 2 +Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --webdav-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_WEBDAV_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Provider notes .PP See below for notes on specific providers. 
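+.PP
+For example, an ownCloud remote that combines the new
+\f[C]owncloud_exclude_shares\f[R] and \f[C]description\f[R] options
+might look like this in \f[C]rclone.conf\f[R] (a minimal sketch; the
+URL, user and description values are illustrative placeholders):
+.IP
+.nf
+\f[C]
+[owncloud]
+type = webdav
+url = https://example.com/remote.php/webdav
+vendor = owncloud
+user = me
+owncloud_exclude_shares = true
+description = work ownCloud, shares excluded
+\f[R]
+.fi
+.PP
+The same options can also be set with the
+\f[C]--webdav-owncloud-exclude-shares\f[R] and
+\f[C]--webdav-description\f[R] flags, or through the
+\f[C]RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES\f[R] and
+\f[C]RCLONE_WEBDAV_DESCRIPTION\f[R] environment variables documented
+above.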
@@ -57484,6 +62357,19 @@ Env Var: RCLONE_YANDEX_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Del,Ctl,InvalidUtf8,Dot +.SS --yandex-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_YANDEX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP When uploading very large files (bigger than about 5 GiB) you will need @@ -57803,6 +62689,19 @@ Env Var: RCLONE_ZOHO_ENCODING Type: Encoding .IP \[bu] 2 Default: Del,Ctl,InvalidUtf8 +.SS --zoho-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_ZOHO_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Setting up your own client_id .PP For Zoho we advise you to set up your own client_id. @@ -58784,6 +63683,19 @@ Env Var: RCLONE_LOCAL_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Dot +.SS --local-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Metadata .PP Depending on which OS is in use the local backend may return only some @@ -58796,6 +63708,8 @@ pkg/attrs#47 (https://github.com/pkg/xattr/issues/47)). User metadata is stored as extended attributes (which may not be supported by all file systems) under the \[dq]user.*\[dq] prefix. .PP +Metadata is supported on files and directories. +.PP Here are the possible system metadata items for the local backend. .PP .TS @@ -58931,6 +63845,678 @@ Options: .IP \[bu] 2 \[dq]error\[dq]: return an error based on option value .SH Changelog +.SS v1.66.0 - 2024-03-10 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0) +.IP \[bu] 2 +Major features +.RS 2 +.IP \[bu] 2 +Rclone will now sync directory modification times if the backend +supports it. +.RS 2 +.IP \[bu] 2 +This can be disabled with +--no-update-dir-modtime (https://rclone.org/docs/#no-update-dir-modtime) +.IP \[bu] 2 +See the overview (https://rclone.org/overview/#features) and look for +the \f[C]D\f[R] flags in the \f[C]ModTime\f[R] column to see which +backends support it. +.RE +.IP \[bu] 2 +Rclone will now sync directory metadata if the backend supports it when +\f[C]-M\f[R]/\f[C]--metadata\f[R] is in use. +.RS 2 +.IP \[bu] 2 +See the overview (https://rclone.org/overview/#features) and look for +the \f[C]D\f[R] flags in the \f[C]Metadata\f[R] column to see which +backends support it. 
+.RE +.IP \[bu] 2 +Bisync has received many updates see below for more details or +bisync\[aq]s changelog (https://rclone.org/bisync/#changelog) +.RE +.IP \[bu] 2 +Removed backends +.RS 2 +.IP \[bu] 2 +amazonclouddrive: Remove Amazon Drive backend code and docs (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +backend +.RS 2 +.IP \[bu] 2 +Add description field for all backends (Paul Stern) +.RE +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Update to go1.22 and make go1.20 the minimum required version (Nick +Craig-Wood) +.IP \[bu] 2 +Fix \f[C]CVE-2024-24786\f[R] by upgrading +\f[C]google.golang.org/protobuf\f[R] (Nick Craig-Wood) +.RE +.IP \[bu] 2 +check: Respect \f[C]--no-unicode-normalization\f[R] and +\f[C]--ignore-case-sync\f[R] for \f[C]--checkfile\f[R] (nielash) +.IP \[bu] 2 +cmd: Much improved shell auto completion which reduces the size of the +completion file and works faster (Nick Craig-Wood) +.IP \[bu] 2 +doc updates (albertony, ben-ba, Eli, emyarod, huajin tong, Jack +Provance, kapitainsky, keongalvin, Nick Craig-Wood, nielash, rarspace01, +rzitzer, Tera, Vincent Murphy) +.IP \[bu] 2 +fs: Add more detailed logging for file includes/excludes (Kyle Reynolds) +.IP \[bu] 2 +lsf +.RS 2 +.IP \[bu] 2 +Add \f[C]--time-format\f[R] flag (nielash) +.IP \[bu] 2 +Make metadata appear for directories (Nick Craig-Wood) +.RE +.IP \[bu] 2 +lsjson: Make metadata appear for directories (Nick Craig-Wood) +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Add \f[C]srcFs\f[R] and \f[C]dstFs\f[R] to \f[C]core/stats\f[R] and +\f[C]core/transferred\f[R] stats (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]operations/hashsum\f[R] to the rc as \f[C]rclone hashsum\f[R] +equivalent (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]config/paths\f[R] to the rc as \f[C]rclone config paths\f[R] +equivalent (Nick Craig-Wood) +.RE +.IP \[bu] 2 +sync +.RS 2 +.IP \[bu] 2 +Optionally report list of synced paths to file (nielash) +.IP \[bu] 2 +Implement directory sync for mod times and metadata (Nick Craig-Wood) +.IP \[bu] 2 +Don\[aq]t set directory modtimes if already set (nielash) +.IP \[bu] 2 +Don\[aq]t sync directory modtimes from backends which don\[aq]t have +directories (Nick Craig-Wood) +.RE +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +backend +.RS 2 +.IP \[bu] 2 +Make backends which use oauth implement the \f[C]Shutdown\f[R] and +shutdown the oauth properly (rkonfj) +.RE +.IP \[bu] 2 +bisync +.RS 2 +.IP \[bu] 2 +Handle unicode and case normalization consistently (nielash) +.IP \[bu] 2 +Partial uploads known issue on +\f[C]local\f[R]/\f[C]ftp\f[R]/\f[C]sftp\f[R] has been resolved (unless +using \f[C]--inplace\f[R]) (nielash) +.IP \[bu] 2 +Fixed handling of unicode normalization and case insensitivity, support +for \f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case), +\f[C]--ignore-case-sync\f[R], \f[C]--no-unicode-normalization\f[R] +(nielash) +.IP \[bu] 2 +Bisync no longer fails to find the correct listing file when configs are +overridden with backend-specific flags. 
+(nielash)
+.RE
+.IP \[bu] 2
+nfsmount
+.RS 2
+.IP \[bu] 2
+Fix exit after external unmount (nielash)
+.IP \[bu] 2
+Fix \f[C]--volname\f[R] being ignored (nielash)
+.RE
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Fix renaming a file on macOS (nielash)
+.IP \[bu] 2
+Fix case-insensitive moves in operations.Move (nielash)
+.IP \[bu] 2
+Fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests
+(nielash)
+.IP \[bu] 2
+Fix TestMkdirModTime test (Nick Craig-Wood)
+.IP \[bu] 2
+Fix TestSetDirModTime for backends with SetDirModTime but not Metadata
+(Nick Craig-Wood)
+.IP \[bu] 2
+Fix typo in log messages (nielash)
+.RE
+.IP \[bu] 2
+serve nfs: Fix writing files via Finder on macOS (nielash)
+.IP \[bu] 2
+serve restic: Fix error handling (Michael Eischer)
+.IP \[bu] 2
+serve webdav: Fix \f[C]--baseurl\f[R] without leading / (Nick
+Craig-Wood)
+.IP \[bu] 2
+stats: Fix race between ResetCounters and stopAverageLoop called from
+time.AfterFunc (Nick Craig-Wood)
+.IP \[bu] 2
+sync
+.RS 2
+.IP \[bu] 2
+\f[C]--fix-case\f[R] flag to rename case insensitive dest (nielash)
+.IP \[bu] 2
+Use operations.DirMove instead of sync.MoveDir for \f[C]--fix-case\f[R]
+(nielash)
+.RE
+.IP \[bu] 2
+systemd: Fix detection and switch to the coreos package everywhere
+rather than having 2 separate libraries (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Fix macOS not noticing errors with \f[C]--daemon\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Notice daemon dying much quicker (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix unicode normalization on macOS (nielash)
+.RE
+.IP \[bu] 2
+Bisync
+.RS 2
+.IP \[bu] 2
+Copies and deletes are now handled in one operation instead of two
+(nielash)
+.IP \[bu] 2
+\f[C]--track-renames\f[R] and \f[C]--backup-dir\f[R] are now supported
+(nielash)
+.IP \[bu] 2
+Final listings are now generated from sync results, to avoid needing to
+re-list (nielash)
+.IP \[bu] 2
+Bisync is now much more resilient to changes that happen during a bisync
+run, and far less prone to critical errors / undetected changes
+(nielash)
+.IP \[bu] 2
+Bisync is now capable of rolling a file listing back in cases of
+uncertainty, essentially marking the file as needing to be rechecked
+next time.
+(nielash)
+.IP \[bu] 2
+A few basic terminal colors are now supported, controllable with
+\f[C]--color\f[R] (https://rclone.org/docs/#color-when)
+(\f[C]AUTO\f[R]|\f[C]NEVER\f[R]|\f[C]ALWAYS\f[R]) (nielash)
+.IP \[bu] 2
+Initial listing snapshots of Path1 and Path2 are now generated
+concurrently, using the same \[dq]march\[dq] infrastructure as
+\f[C]check\f[R] and \f[C]sync\f[R], for performance improvements and
+less risk of error.
+(nielash)
+.IP \[bu] 2
+\f[C]--resync\f[R] is now much more efficient (especially for users of
+\f[C]--create-empty-src-dirs\f[R]) (nielash)
+.IP \[bu] 2
+Google Docs (and other files of unknown size) are now supported (with
+the same options as in \f[C]sync\f[R]) (nielash)
+.IP \[bu] 2
+Equality checks before a sync conflict rename now fall back to
+\f[C]cryptcheck\f[R] (when possible) or \f[C]--download\f[R], instead
+of \f[C]--size-only\f[R], when \f[C]check\f[R] is not available.
+(nielash)
+.IP \[bu] 2
+Bisync now fully supports comparing based on any combination of size,
+modtime, and checksum, lifting the prior restriction on backends without
+modtime support.
+(nielash)
+.IP \[bu] 2
+Bisync now supports a \[dq]Graceful Shutdown\[dq] mode to cleanly cancel
+a run early without requiring \f[C]--resync\f[R].
+(nielash) +.IP \[bu] 2 +New \f[C]--recover\f[R] flag allows robust recovery in the event of +interruptions, without requiring \f[C]--resync\f[R]. +(nielash) +.IP \[bu] 2 +A new \f[C]--max-lock\f[R] setting allows lock files to automatically +renew and expire, for better automatic recovery when a run is +interrupted. +(nielash) +.IP \[bu] 2 +Bisync now supports auto-resolving sync conflicts and customizing rename +behavior with new \f[C]--conflict-resolve\f[R], +\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] flags. +(nielash) +.IP \[bu] 2 +A new \f[C]--resync-mode\f[R] flag allows more control over which +version of a file gets kept during a \f[C]--resync\f[R]. +(nielash) +.IP \[bu] 2 +Bisync now supports +\f[C]--retries\f[R] (https://rclone.org/docs/#retries-int) and +\f[C]--retries-sleep\f[R] (when \f[C]--resilient\f[R] is set.) (nielash) +.IP \[bu] 2 +Clarify file operation directions in dry-run logs (Kyle Reynolds) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Fix cleanRootPath on Windows after go1.21.4 stdlib update (nielash) +.IP \[bu] 2 +Implement setting modification time on directories (nielash) +.IP \[bu] 2 +Implement modtime and metadata for directories (Nick Craig-Wood) +.IP \[bu] 2 +Fix setting of btime on directories on Windows (Nick Craig-Wood) +.IP \[bu] 2 +Delete backend implementation of Purge to speed up and make stats (Nick +Craig-Wood) +.IP \[bu] 2 +Support metadata setting and mapping on server side Move (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Cache +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (if supported by +wrapped remote) (nielash) +.IP \[bu] 2 +Implement setting metadata on directories (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Crypt +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (if supported by +wrapped remote) (nielash) +.IP \[bu] 2 +Implement setting metadata on directories (Nick Craig-Wood) +.IP \[bu] 2 +Improve handling of undecryptable file names (nielash) +.IP \[bu] 2 +Add missing error check spotted by linter (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Implement \f[C]--azureblob-delete-snapshots\f[R] (Nick Craig-Wood) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Clarify exactly what \f[C]--b2-download-auth-duration\f[R] does in the +docs (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Chunker +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (if supported by +wrapped remote) (nielash) +.IP \[bu] 2 +Implement setting metadata on directories (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Combine +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (if supported by +wrapped remote) (nielash) +.IP \[bu] 2 +Implement setting metadata on directories (Nick Craig-Wood) +.IP \[bu] 2 +Fix directory metadata error on upstream root (nielash) +.IP \[bu] 2 +Fix directory move across upstreams (nielash) +.RE +.IP \[bu] 2 +Compress +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (if supported by +wrapped remote) (nielash) +.IP \[bu] 2 +Implement setting metadata on directories (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Implement setting modification time on directories (nielash) +.IP \[bu] 2 +Implement modtime and metadata setting for directories (Nick Craig-Wood) +.IP \[bu] 2 +Support metadata setting and mapping on server side Move,Copy (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +FTP +.RS 2 +.IP \[bu] 2 +Fix mkdir with rsftp which is returning the wrong code (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Hasher +.RS 2 +.IP \[bu] 2 
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.IP \[bu] 2
+Fix error from trying to stop an already-stopped db (nielash)
+.IP \[bu] 2
+Look for cached hash if passed hash unexpectedly blank (nielash)
+.RE
+.IP \[bu] 2
+Imagekit
+.RS 2
+.IP \[bu] 2
+Updated docs and web content (Harshit Budhraja)
+.IP \[bu] 2
+Updated overview - supported operations (Harshit Budhraja)
+.RE
+.IP \[bu] 2
+Mega
+.RS 2
+.IP \[bu] 2
+Fix panic with go1.22 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Netstorage
+.RS 2
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Add metadata support (nielash)
+.RE
+.IP \[bu] 2
+Opendrive
+.RS 2
+.IP \[bu] 2
+Fix moving file/folder within the same parent dir (nielash)
+.RE
+.IP \[bu] 2
+Oracle Object Storage
+.RS 2
+.IP \[bu] 2
+Support \f[C]backend restore\f[R] command (Nikhil Ahuja)
+.IP \[bu] 2
+Support workload identity authentication for OKE (Anders Swanson)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Fix encoding of Root method (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Quatrix
+.RS 2
+.IP \[bu] 2
+Fix \f[C]Content-Range\f[R] header (Volodymyr)
+.IP \[bu] 2
+Add option to skip project folders (Oksana Zhykina)
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add \f[C]--s3-version-deleted\f[R] to show delete markers in listings
+when using versions.
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add IPv6 support with option \f[C]--s3-use-dual-stack\f[R] (Anthony
+Metzidis)
+.IP \[bu] 2
+Copy parts in parallel when doing chunked server side copy (Nick
+Craig-Wood)
+.IP \[bu] 2
+GCS provider: fix server side copy of files bigger than 5G (Nick
+Craig-Wood)
+.IP \[bu] 2
+Support metadata setting and mapping on server side Copy (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Seafile
+.RS 2
+.IP \[bu] 2
+Fix download/upload error when \f[C]FILE_SERVER_ROOT\f[R] is relative
+(DanielEgbers)
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (nielash)
+.IP \[bu] 2
+Set directory modtimes update on write flag (Nick Craig-Wood)
+.IP \[bu] 2
+Shorten wait delay for external ssh binaries now that we are using
+go1.20 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Swift
+.RS 2
+.IP \[bu] 2
+Avoid unnecessary container versioning check (Joe Cai)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Reduce priority of chunks upload log (Gabriel Ramos)
+.IP \[bu] 2
+owncloud: Add config \f[C]owncloud_exclude_shares\f[R] which allows
+shared files and folders to be excluded when listing remote resources
+(Thomas M\[:u]ller)
+.RE
+.SS v1.65.2 - 2024-01-24
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.65.1...v1.65.2)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: bump github.com/cloudflare/circl from 1.3.6 to 1.3.7 (dependabot)
+.IP \[bu] 2
+docs updates (Nick Craig-Wood, kapitainsky, nielash, Tera, Harshit
+Budhraja)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix stale data when using \f[C]--vfs-cache-mode\f[R] full (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+\f[B]IMPORTANT\f[R] Fix data corruption bug - see
+#7590 (https://github.com/rclone/rclone/issues/7590) (Nick Craig-Wood)
+.RE
+.SS v1.65.1 - 2024-01-08
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.65.0...v1.65.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795 (dependabot)
+.IP \[bu] 2
+Update to go1.21.5 to fix Windows path problems (Nick Craig-Wood)
+.IP \[bu] 2
+Fix docker build on arm/v6 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+install.sh: fix harmless error message on install (Nick Craig-Wood)
+.IP \[bu] 2
+accounting: fix stats to show server side transfers (Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (albertony, ben-ba, Eli Orzitzer, emyarod, keongalvin,
+rarspace01)
+.IP \[bu] 2
+nfsmount: Compile for all unix oses, add \f[C]--sudo\f[R] and fix
+error/option handling (Nick Craig-Wood)
+.IP \[bu] 2
+operations: Fix files moved by rclone move not being counted as
+transfers (Nick Craig-Wood)
+.IP \[bu] 2
+oauthutil: Avoid panic when \f[C]*token\f[R] and \f[C]*ts.token\f[R] are
+the same (rkonfj)
+.IP \[bu] 2
+serve s3: Fix listing oddities (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Note that \f[C]--vfs-refresh\f[R] runs in the background (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Azurefiles
+.RS 2
+.IP \[bu] 2
+Fix storage base url (Oksana)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Chunker
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Fix used space on dropbox team accounts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Fix multi-thread copy (WeidiDeng)
+.RE
+.IP \[bu] 2
+Googlephotos
+.RS 2
+.IP \[bu] 2
+Fix nil pointer exception when batch failed (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hasher
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.IP \[bu] 2
+Fix invalid memory address error when MaxAge == 0 (nielash)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix error listing: unknown object type \f[C]\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \[dq]unauthenticated: Unauthenticated\[dq] errors when uploading
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Oracleobjectstorage
+.RS 2
+.IP \[bu] 2
+Fix object storage endpoint for custom endpoints (Manoj Ghosh)
+.IP \[bu] 2
+Multipart copy: create bucket if it doesn\[aq]t exist.
+(Manoj Ghosh)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Fix CVE-2023-45286 / GHSA-xwh9-gc39-5298 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix crash if no UploadId in multipart upload (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Fix shares not listed by updating go-smb2 (halms)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
.SS v1.65.0 - 2023-11-26
.PP
See commits (https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
@@ -71868,10 +77454,14 @@ Project named rclone
Project started
.SH Bugs and Limitations
.SS Limitations
-.SS Directory timestamps aren\[aq]t preserved
+.SS Directory timestamps aren\[aq]t preserved on some backends
.PP
-Rclone doesn\[aq]t currently preserve the timestamps of directories.
-This is because rclone only really considers objects when syncing.
+As of \f[C]v1.66\f[R], rclone supports syncing directory modtimes if
+the backend supports it.
+Some backends do not support it; see the
+overview (https://rclone.org/overview/) for a complete list.
+Additionally, note that empty directories are not synced by default
+(this can be enabled with \f[C]--create-empty-src-dirs\f[R]).
.SS Rclone struggles with millions of files in a directory/bucket
.PP
Currently rclone loads each directory/bucket entirely into memory before
@@ -72323,7 +77913,7 @@ Bj\[/o]rn Erik Pedersen
.IP \[bu] 2
Lukas Loesche
.IP \[bu] 2
-emyarod
+emyarod
.IP \[bu] 2
T.C.
Ferguson
@@ -73835,6 +79425,48 @@ Alen \[vS]iljak
.IP \[bu] 2
\[u4F60]\[u77E5]\[u9053]\[u672A]\[u6765]\[u5417]
.IP \[bu] 2
Abhinav Dhiman <8640877+ahnv@users.noreply.github.com>
+.IP \[bu] 2
+halms <7513146+halms@users.noreply.github.com>
+.IP \[bu] 2
+ben-ba
+.IP \[bu] 2
+Eli Orzitzer
+.IP \[bu] 2
+Anthony Metzidis
+.IP \[bu] 2
+emyarod
+.IP \[bu] 2
+keongalvin
+.IP \[bu] 2
+rarspace01
+.IP \[bu] 2
+Paul Stern
+.IP \[bu] 2
+Nikhil Ahuja
+.IP \[bu] 2
+Harshit Budhraja <52413945+harshit-budhraja@users.noreply.github.com>
+.IP \[bu] 2
+Tera <24725862+teraa@users.noreply.github.com>
+.IP \[bu] 2
+Kyle Reynolds
+.IP \[bu] 2
+Michael Eischer
+.IP \[bu] 2
+Thomas M\[:u]ller <1005065+DeepDiver1975@users.noreply.github.com>
+.IP \[bu] 2
+DanielEgbers <27849724+DanielEgbers@users.noreply.github.com>
+.IP \[bu] 2
+Jack Provance <49460795+njprov@users.noreply.github.com>
+.IP \[bu] 2
+Gabriel Ramos <109390599+gabrielramos02@users.noreply.github.com>
+.IP \[bu] 2
+Dan McArdle
+.IP \[bu] 2
+Joe Cai
+.IP \[bu] 2
+Anders Swanson
+.IP \[bu] 2
+huajin tong <137764712+thirdkeyword@users.noreply.github.com>
.SH Contact the rclone project
.SS Forum
.PP