Mirror of https://github.com/rclone/rclone.git, synced 2024-11-22 12:36:38 +08:00

Commit "Version v1.56.0" (37ff05a5fa), parent c67c1ab4ee

MANUAL.html (generated): 2451 changed lines; file diff suppressed because it is too large.
MANUAL.txt (generated): 3139 changed lines; file diff suppressed because it is too large.
@@ -2,3 +2,4 @@
<nick@raig-wood.com>
<anaghk.dos@gmail.com>
<33207650+sp31415t1@users.noreply.github.com>
<unknown>
@@ -23,6 +23,7 @@ docs = [
"rc.md",
"overview.md",
"flags.md",
"docker.md",

# Keep these alphabetical by full name
"fichier.md",
@@ -3,8 +3,7 @@ title: "Alias"
description: "Remote Aliases"
---

{{< icon "fa fa-link" >}} Alias
-----------------------------------------
# {{< icon "fa fa-link" >}} Alias

The `alias` remote provides a new name for another remote.

@@ -3,8 +3,7 @@ title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
---

{{< icon "fab fa-amazon" >}} Amazon Drive
-----------------------------------------
# {{< icon "fab fa-amazon" >}} Amazon Drive

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.

@@ -260,7 +259,7 @@ Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink"

@@ -270,7 +269,7 @@ underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G
- Default: 9Gi

#### --acd-encoding

@@ -431,7 +431,7 @@ put them back in again.` >}}
* Laurens Janssen <BD69BM@insim.biz>
* Bob Bagwill <bobbagwill@gmail.com>
* Nathan Collins <colli372@msu.edu>
* lostheli <unknown>
* lostheli
* kelv <kelvin@acks.org>
* Milly <milly.ca@gmail.com>
* gtorelly <gtorelly@gmail.com>
@@ -3,8 +3,7 @@ title: "Microsoft Azure Blob Storage"
description: "Rclone docs for Microsoft Azure Blob Storage"
---

{{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage
-----------------------------------------
# {{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage

Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g.

@@ -285,7 +284,7 @@ Note that this is stored in memory and there may be up to
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M
- Default: 4Mi

#### --azureblob-list-chunk

@@ -3,8 +3,7 @@ title: "B2"
description: "Backblaze B2"
---

{{< icon "fa fa-fire" >}} Backblaze B2
----------------------------------------
# {{< icon "fa fa-fire" >}} Backblaze B2

B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).

@@ -406,7 +405,7 @@ This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M
- Default: 200Mi

#### --b2-copy-cutoff

@@ -420,7 +419,7 @@ The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4G
- Default: 4Gi

#### --b2-chunk-size

@@ -434,7 +433,7 @@ minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M
- Default: 96Mi

#### --b2-disable-checksum

@@ -3,8 +3,7 @@ title: "Box"
description: "Rclone docs for Box"
---

{{< icon "fa fa-archive" >}} Box
-----------------------------------------
# {{< icon "fa fa-archive" >}} Box

Paths are specified as `remote:path`

@@ -374,7 +373,7 @@ Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 50M
- Default: 50Mi

#### --box-commit-retries

@@ -3,8 +3,7 @@ title: "Cache"
description: "Rclone docs for cache remote"
---

{{< icon "fa fa-archive" >}} Cache (DEPRECATED)
-----------------------------------------
# {{< icon "fa fa-archive" >}} Cache (DEPRECATED)

The `cache` remote wraps another existing remote and stores file structure
and its data for long running tasks like `rclone mount`.

@@ -361,9 +360,9 @@ will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
- Default: 5Mi
- Examples:
    - "1m"
    - "1M"
        - 1 MiB
    - "5M"
        - 5 MiB

@@ -398,7 +397,7 @@ oldest chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
- Default: 10G
- Default: 10Gi
- Examples:
    - "500M"
        - 500 MiB

@@ -5,6 +5,149 @@ description: "Rclone Changelog"

# Changelog

## v1.56.0 - 2021-07-20

[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)

* New backends
    * [Uptobox](/uptobox/) (buengese)
* New commands
    * [serve docker](/commands/rclone_serve_docker/) (Antoine GIRARD) (Ivan Andreev)
        * and accompanying [docker volume plugin](/docker/)
    * [checksum](/commands/rclone_checksum/) to check files against a file of checksums (Ivan Andreev)
        * this is also available as `rclone md5sum -C` etc
    * [config touch](/commands/rclone_config_touch/): ensure config exists at configured location (albertony)
    * [test changenotify](/commands/rclone_test_changenotify/): command to help debugging changenotify (Nick Craig-Wood)
* Deprecations
    * `dbhashsum`: Remove command deprecated a year ago (Ivan Andreev)
    * `cache`: Deprecate cache backend (Ivan Andreev)
* New Features
    * rework config system so it can be used non-interactively via cli and rc API.
        * See docs in [config create](/commands/rclone_config_create/)
        * This is a very big change to all the backends so may cause breakages - please file bugs!
    * librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)
        * Link a C-API rclone shared object into your project
        * Use the RC as an in memory interface
        * Python example supplied
        * Also supports Android and gomobile
    * fs
        * Add `--disable-http2` for global http2 disable (Nick Craig-Wood)
        * Make `--dump` imply `-vv` (Alex Chen)
        * Use binary prefixes for size and rate units (albertony)
        * Use decimal prefixes for counts (albertony)
        * Add google search widget to rclone.org (Ivan Andreev)
    * accounting: Calculate rolling average speed (Haochen Tong)
    * atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)
    * build
        * Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)
        * Add Android build with gomobile (x0b)
    * check: Log the hash in use like cryptcheck does (Nick Craig-Wood)
    * version: Print os/version, kernel and bitness (Ivan Andreev)
    * config
        * Prevent use of Windows reserved names in config file name (albertony)
        * Create config file in windows appdata directory by default (albertony)
        * Treat any config file paths with filename notfound as memory-only config (albertony)
        * Delay load config file (albertony)
        * Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)
        * Allow `config create` and friends to take `key=value` parameters (Nick Craig-Wood)
        * Fixed issues with flags/options set by environment vars. (Ole Frost)
    * fshttp: Implement graceful DSCP error handling (Tyson Moore)
    * lib/http - provides an abstraction for a central http server that services can bind routes to (Nolan Woods)
        * Add `--template` config and flags to serve/data (Nolan Woods)
        * Add default 404 handler (Nolan Woods)
    * link: Use "off" value for unset expiry (Nick Craig-Wood)
    * oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)
    * rcat: Add `--size` flag for more efficient uploads of known size (Nazar Mishturak)
    * serve sftp: Add `--stdio` flag to serve via stdio (Tom)
    * sync: Don't warn about `--no-traverse` when `--files-from` is set (Nick Gaya)
    * `test makefiles`
        * Add `--seed` flag and make data generated repeatable (Nick Craig-Wood)
        * Add log levels and speed summary (Nick Craig-Wood)
* Bug Fixes
    * accounting: Fix startTime of statsGroups.sum (Haochen Tong)
    * cmd/ncdu: Fix out of range panic in delete (buengese)
    * config
        * Fix issues with memory-only config file paths (albertony)
        * Fix in memory config not saving on the fly backend config (Nick Craig-Wood)
    * fshttp: Fix address parsing for DSCP (Tyson Moore)
    * ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
    * oauthutil: Fix old authorize result not recognised (Cnly)
    * operations: Don't update timestamps of files in `--compare-dest` (Nick Gaya)
    * selfupdate: fix archive name on macos (Ivan Andreev)
* Mount
    * Refactor before adding serve docker (Antoine GIRARD)
* VFS
    * Add cache reset for `--vfs-cache-max-size` handling at cache poll interval (Leo Luan)
    * Fix modtime changing when reading file into cache (Nick Craig-Wood)
    * Avoid unnecessary subdir in cache path (albertony)
    * Fix that umask option cannot be set as environment variable (albertony)
    * Do not print notice about missing poll-interval support when set to 0 (albertony)
* Local
    * Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)
    * Add `--local-unicode-normalization` (and remove `--local-no-unicode-normalization`) (Nick Craig-Wood)
    * Skip entries removed concurrently with List() (Ivan Andreev)
* Crypt
    * Support timestamped filenames from `--b2-versions` (Dominik Mydlil)
* B2
    * Don't include the bucket name in public link file prefixes (Jeffrey Tolar)
    * Fix versions and .files with no extension (Nick Craig-Wood)
    * Factor version handling into lib/version (Dominik Mydlil)
* Box
    * Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)
    * Return errors instead of calling log.Fatal with them (Nick Craig-Wood)
* Drive
    * Switch to the Drives API for looking up shared drives (Nick Craig-Wood)
    * Fix some google docs being treated as files (Nick Craig-Wood)
* Dropbox
    * Add `--dropbox-batch-mode` flag to speed up uploading (Nick Craig-Wood)
        * Read the [batch mode](/dropbox/#batch-mode) docs for more info
    * Set visibility in link sharing when `--expire` is set (Nick Craig-Wood)
    * Simplify chunked uploads (Alexey Ivanov)
    * Improve "own App IP" instructions (Ivan Andreev)
* Fichier
    * Check if more than one upload link is returned (Nick Craig-Wood)
    * Support downloading password protected files and folders (Florian Penzkofer)
    * Make error messages report text from the API (Nick Craig-Wood)
    * Fix move of files in the same directory (Nick Craig-Wood)
    * Check that we actually got a download token and retry if we didn't (buengese)
* Filefabric
    * Fix listing after change of from field from "int" to int. (Nick Craig-Wood)
* FTP
    * Make upload error 250 indicate success (Nick Craig-Wood)
* GCS
    * Make compatible with gsutil's mtime metadata (database64128)
    * Clean up time format constants (database64128)
* Google Photos
    * Fix read only scope not being used properly (Nick Craig-Wood)
* HTTP
    * Replace httplib with lib/http (Nolan Woods)
    * Clean up Bind to better use middleware (Nolan Woods)
* Jottacloud
    * Fix legacy auth with state based config system (buengese)
    * Fix invalid url in output from link command (albertony)
    * Add no versions option (buengese)
* Onedrive
    * Add `list_chunk option` (Nick Gaya)
    * Also report root error if unable to cancel multipart upload (Cnly)
    * Fix failed to configure: empty token found error (Nick Craig-Wood)
    * Make link return direct download link (Xuanchen Wu)
* S3
    * Add `--s3-no-head-object` (Tatsuya Noyori)
    * Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)
    * Don't check to see if remote is object if it ends with / (Nick Craig-Wood)
    * Add SeaweedFS (Chris Lu)
    * Update Alibaba OSS endpoints (Chuan Zh)
* SFTP
    * Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)
    * Expand tilde and environment variables in configured `known_hosts_file` (albertony)
* Tardigrade
    * Upgrade to uplink v1.4.6 (Caleb Case)
    * Use negative offset (Caleb Case)
    * Add warning about `too many open files` (acsfer)
* WebDAV
    * Fix sharepoint auth over http (Nick Craig-Wood)
    * Add headers option (Antoon Prins)

## v1.55.1 - 2021-04-26

[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)

@@ -3,8 +3,7 @@ title: "Chunker"
description: "Split-chunking overlay remote"
---

{{< icon "fa fa-cut" >}}Chunker (BETA)
----------------------------------------
# {{< icon "fa fa-cut" >}}Chunker (BETA)

The `chunker` overlay transparently splits large files into smaller chunks
during upload to wrapped remote and transparently assembles them back

@@ -332,7 +331,7 @@ Files larger than chunk size will be split in chunks.
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
- Default: 2G
- Default: 2Gi

#### --chunker-hash-type

@@ -24,7 +24,7 @@ you would do:
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:

    rclone config update myremote swift env_auth=true config_refresh_token=false
    rclone config update myremote env_auth=true config_refresh_token=false

Note that if the config process would normally ask a question the
default is taken (unless `--non-interactive` is used). Each time
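By way of illustration, the same non-interactive `key=value` style also works when creating a new remote; the remote name and the S3 parameters below are placeholders, not part of this commit:

```
rclone config create mys3 s3 provider=AWS env_auth=true region=us-east-1
rclone config show mys3
```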
@@ -18,7 +18,7 @@ FUSE.

First set up your remote using `rclone config`. Check it works with `rclone ls` etc.

On Linux and macOS, you can either run mount in foreground mode or background (daemon) mode.
On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode.
Mount runs in foreground mode by default, use the `--daemon` flag to specify background mode.
You can only run mount in foreground mode on Windows.

@@ -608,7 +608,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
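For instance, the ownership and permission flags above might be combined in a background mount like this; the mountpoint, ids and umask are illustrative values only:

```
rclone mount remote:path /mnt/data --daemon --uid 1000 --gid 1000 --umask 022 --vfs-cache-mode writes
```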
@@ -33,6 +33,7 @@ Here are the keys - press '?' to toggle the help on and off
a toggle average size in directory
n,s,C,A sort by name,size,count,average size
d delete file/directory
y copy current path to clipboard
Y display current path
^L refresh screen
? to toggle help on and off
@@ -319,7 +319,7 @@ rclone serve dlna remote:path [flags]
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
@@ -354,7 +354,7 @@ rclone serve docker [flags]
--socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
@@ -403,7 +403,7 @@ rclone serve ftp remote:path [flags]
--public-ip string Public IP address to advertise for passive connections.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
@@ -398,7 +398,7 @@ rclone serve http remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
@@ -419,7 +419,7 @@ rclone serve sftp remote:path [flags]
--read-only Mount read-only.
--stdio Run an sftp server on run stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
@@ -491,7 +491,7 @@ rclone serve webdav remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
@@ -37,6 +37,6 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in <dir>
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.

@@ -1,13 +1,13 @@
---
title: "rclone test makefiles"
description: "Make a random file hierarchy in <dir>"
description: "Make a random file hierarchy in a directory"
slug: rclone_test_makefiles
url: /commands/rclone_test_makefiles/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefiles/ and as part of making a release run "make commanddocs"
---
# rclone test makefiles

Make a random file hierarchy in <dir>
Make a random file hierarchy in a directory

```
rclone test makefiles <dir> [flags]
@@ -3,8 +3,7 @@ title: "Compress"
description: "Compression Remote"
---

{{< icon "fas fa-compress" >}}Compress (Experimental)
-----------------------------------------
# {{< icon "fas fa-compress" >}}Compress (Experimental)

### Warning
This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is

@@ -142,6 +141,6 @@ Some remotes don't allow the upload of files with unknown size.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
- Default: 20M
- Default: 20Mi

{{< rem autogenerated options stop >}}
@@ -3,8 +3,7 @@ title: "Crypt"
description: "Encryption overlay remote"
---

{{< icon "fa fa-lock" >}}Crypt
----------------------------------------
# {{< icon "fa fa-lock" >}}Crypt

Rclone `crypt` remotes encrypt and decrypt other remotes.

@@ -3,8 +3,7 @@ title: "Google drive"
description: "Rclone docs for Google drive"
---

{{< icon "fab fa-google" >}} Google Drive
-----------------------------------------
# {{< icon "fab fa-google" >}} Google Drive

Paths are specified as `drive:path`

@@ -868,7 +867,7 @@ Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M
- Default: 8Mi

#### --drive-chunk-size

@@ -882,7 +881,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M
- Default: 8Mi

#### --drive-acknowledge-abuse

@@ -3,8 +3,7 @@ title: "Dropbox"
description: "Rclone docs for Dropbox"
---

{{< icon "fab fa-dropbox" >}} Dropbox
---------------------------------
# {{< icon "fab fa-dropbox" >}} Dropbox

Paths are specified as `remote:path`

@@ -238,7 +237,7 @@ Leave blank to use the provider defaults.

#### --dropbox-chunk-size

Upload chunk size. (< 150M).
Upload chunk size. (< 150Mi).

Any files larger than this will be uploaded in chunks of this size.

@@ -250,7 +249,7 @@ memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
- Default: 48M
- Default: 48Mi

#### --dropbox-impersonate

@@ -309,6 +308,75 @@ shared folder.
- Type: bool
- Default: false

#### --dropbox-batch-mode

Upload file batching sync|async|off.

This sets the batch mode used by rclone.

For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)

This has 3 possible values

- off - no batching
- sync - batch uploads and check completion (default)
- async - batch upload and don't check completion

Rclone will close any outstanding batches when it exits which may make
a delay on quit.


- Config: batch_mode
- Env Var: RCLONE_DROPBOX_BATCH_MODE
- Type: string
- Default: "sync"

#### --dropbox-batch-size

Max number of files in upload batch.

This sets the batch size of files to upload. It has to be less than 1000.

By default this is 0 which means rclone which calculate the batch size
depending on the setting of batch_mode.

- batch_mode: async - default batch_size is 100
- batch_mode: sync - default batch_size is the same as --transfers
- batch_mode: off - not in use

Rclone will close any outstanding batches when it exits which may make
a delay on quit.

Setting this is a great idea if you are uploading lots of small files
as it will make them a lot quicker. You can use --transfers 32 to
maximise throughput.


- Config: batch_size
- Env Var: RCLONE_DROPBOX_BATCH_SIZE
- Type: int
- Default: 0

#### --dropbox-batch-timeout

Max time to allow an idle upload batch before uploading

If an upload batch is idle for more than this long then it will be
uploaded.

The default for this is 0 which means rclone will choose a sensible
default based on the batch_mode in use.

- batch_mode: async - default batch_timeout is 500ms
- batch_mode: sync - default batch_timeout is 10s
- batch_mode: off - not in use


- Config: batch_timeout
- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
- Type: Duration
- Default: 0s

#### --dropbox-encoding

This sets the encoding for the backend.
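As an illustration of the batch options above, an upload of many small files might be tuned like this; the remote name, source path and values are examples only:

```
rclone copy /path/to/small-files dropbox:backup --dropbox-batch-mode async --transfers 32
```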
@@ -3,8 +3,7 @@ title: "1Fichier"
description: "Rclone docs for 1Fichier"
---

{{< icon "fa fa-archive" >}} 1Fichier
-----------------------------------------
# {{< icon "fa fa-archive" >}} 1Fichier

This is a backend for the [1fichier](https://1fichier.com) cloud
storage service. Note that a Premium subscription is required to use

@@ -139,6 +138,28 @@ If you want to download a shared folder, add this parameter
- Type: string
- Default: ""

#### --fichier-file-password

If you want to download a shared file that is password protected, add this parameter

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: file_password
- Env Var: RCLONE_FICHIER_FILE_PASSWORD
- Type: string
- Default: ""

#### --fichier-folder-password

If you want to list the files in a shared folder that is password protected, add this parameter

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: folder_password
- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
- Type: string
- Default: ""

#### --fichier-encoding

This sets the encoding for the backend.
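To illustrate the new password options, the plain-text password is obscured first and the resulting value is then passed to the option; the remote name and password below are placeholders:

```
# obscure the plain-text password first
rclone obscure 'my-share-password'
# then pass the obscured value printed above to the backend option
rclone ls fichier: --fichier-file-password 'ObscuredValueFromAbove'
```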
@@ -3,8 +3,7 @@ title: "Enterprise File Fabric"
description: "Rclone docs for the Enterprise File Fabric backend"
---

{{< icon "fa fa-cloud" >}} Enterprise File Fabric
-----------------------------------------
# {{< icon "fa fa-cloud" >}} Enterprise File Fabric

This backend supports [Storage Made Easy's Enterprise File
Fabric™](https://storagemadeeasy.com/about/) which provides a software
@@ -154,7 +154,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0-beta.5531.41f561bf2.pr-commanddocs")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
```

@@ -311,6 +311,8 @@ and may be set in the config file.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token

@@ -375,6 +377,7 @@ and may be set in the config file.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)

@@ -587,7 +590,7 @@ and may be set in the config file.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
--zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```

@@ -3,8 +3,7 @@ title: "FTP"
description: "Rclone docs for FTP backend"
---

{{< icon "fa fa-file" >}} FTP
------------------------------
# {{< icon "fa fa-file" >}} FTP

FTP is the File Transfer Protocol. Rclone FTP support is provided using the
[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
@@ -3,8 +3,7 @@ title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
---

{{< icon "fab fa-google" >}} Google Cloud Storage
-------------------------------------------------
# {{< icon "fab fa-google" >}} Google Cloud Storage

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -3,8 +3,7 @@ title: "Google Photos"
description: "Rclone docs for Google Photos"
---

{{< icon "fa fa-images" >}} Google Photos
-------------------------------------------------
# {{< icon "fa fa-images" >}} Google Photos

The rclone backend for [Google Photos](https://www.google.com/photos/about/) is
a specialized backend for transferring photos and videos to and from
@@ -3,8 +3,7 @@ title: "HDFS Remote"
description: "Remote for Hadoop Distributed Filesystem"
---

{{< icon "fa fa-globe" >}} HDFS
-------------------------------------------------
# {{< icon "fa fa-globe" >}} HDFS

[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.

@@ -190,7 +189,7 @@ Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode

Enables KERBEROS authentication. Specifies the Service Principal Name
(<SERVICE>/<FQDN>) for the namenode.
(SERVICE/FQDN) for the namenode.

- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
@@ -3,8 +3,7 @@ title: "HTTP Remote"
description: "Read only remote for HTTP servers"
---

{{< icon "fa fa-globe" >}} HTTP
-------------------------------------------------
# {{< icon "fa fa-globe" >}} HTTP

The HTTP remote is a read only remote for reading files of a
webserver. The webserver should provide file listings which rclone
@@ -3,8 +3,7 @@ title: "Hubic"
description: "Rclone docs for Hubic"
---

{{< icon "fa fa-space-shuttle" >}} Hubic
-----------------------------------------
# {{< icon "fa fa-space-shuttle" >}} Hubic

Paths are specified as `remote:path`

@@ -173,7 +172,7 @@ default for this is 5 GiB which is its maximum value.
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5G
- Default: 5Gi

#### --hubic-no-chunk

@@ -3,8 +3,7 @@ title: "Jottacloud"
description: "Rclone docs for Jottacloud"
---

{{< icon "fa fa-cloud" >}} Jottacloud
-----------------------------------------
# {{< icon "fa fa-cloud" >}} Jottacloud

Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.

@@ -3,8 +3,7 @@ title: "Koofr"
description: "Rclone docs for Koofr"
---

{{< icon "fa fa-suitcase" >}} Koofr
-----------------------------------------
# {{< icon "fa fa-suitcase" >}} Koofr

Paths are specified as `remote:path`

@@ -3,8 +3,7 @@ title: "Local Filesystem"
description: "Rclone docs for the local filesystem"
---

{{< icon "fas fa-hdd" >}} Local Filesystem
-------------------------------------------
# {{< icon "fas fa-hdd" >}} Local Filesystem

Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so

@@ -367,32 +366,41 @@ points, as you explicitly acknowledge that they should be skipped.

#### --local-zero-size-links

Assume the Stat size of links is zero (and read them instead)
Assume the Stat size of links is zero (and read them instead) (Deprecated)

On some virtual filesystems (such ash LucidLink), reading a link size via a Stat call always returns 0.
However, on unix it reads as the length of the text in the link. This may cause errors like this when
syncing:
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places

Failed to copy: corrupted on transfer: sizes differ 0 vs 13
- Windows
- On some virtual filesystems (such ash LucidLink)
- Android

So rclone now always reads the link

Setting this flag causes rclone to read the link and use that as the size of the link
instead of 0 which in most cases fixes the problem.

- Config: zero_size_links
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
- Type: bool
- Default: false

#### --local-no-unicode-normalization
#### --local-unicode-normalization

Don't apply unicode normalization to paths and filenames (Deprecated)
Apply unicode NFC normalization to paths and filenames

This flag is deprecated now. Rclone no longer normalizes unicode file
names, but it compares them with unicode normalization in the sync
routine instead.
This flag can be used to normalize file names into unicode NFC form
that are read from the local filesystem.

- Config: no_unicode_normalization
- Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
Rclone does not normally touch the encoding of file names it reads from
the file system.

This can be useful when using macOS as it normally provides decomposed (NFD)
unicode which in some language (eg Korean) doesn't display properly on
some OSes.

Note that rclone compares filenames with unicode normalization in the sync
routine so this flag shouldn't normally be used.

- Config: unicode_normalization
- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
- Type: bool
- Default: false

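For example, the new flag could be applied to a sync from a macOS volume like this; the source and destination paths are illustrative:

```
rclone sync /Volumes/Photos remote:photos --local-unicode-normalization
```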
@@ -3,8 +3,7 @@ title: "Mailru"
description: "Mail.ru Cloud"
---

{{< icon "fas fa-at" >}} Mail.ru Cloud
----------------------------------------
# {{< icon "fas fa-at" >}} Mail.ru Cloud

[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.

@@ -241,7 +240,7 @@ This option allows you to disable speedup (put by hash) for large files
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
- Type: SizeSuffix
- Default: 3G
- Default: 3Gi
- Examples:
    - "0"
        - Completely disable speedup (put by hash).

@@ -257,7 +256,7 @@ Files larger than the size given below will always be hashed on disk.
- Config: speedup_max_memory
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
- Type: SizeSuffix
- Default: 32M
- Default: 32Mi
- Examples:
    - "0"
        - Preliminary hashing will always be done in a temporary disk location.
@@ -3,8 +3,7 @@ title: "Mega"
description: "Rclone docs for Mega"
---

{{< icon "fa fa-archive" >}} Mega
-----------------------------------------
# {{< icon "fa fa-archive" >}} Mega

[Mega](https://mega.nz/) is a cloud storage and file hosting service
known for its security feature where all files are encrypted locally
@@ -3,8 +3,7 @@ title: "Memory"
description: "Rclone docs for Memory backend"
---

{{< icon "fas fa-memory" >}} Memory
-----------------------------------------
# {{< icon "fas fa-memory" >}} Memory

The memory backend is an in RAM backend. It does not persist its
data - use the local backend for that.
@@ -3,8 +3,7 @@ title: "Microsoft OneDrive"
description: "Rclone docs for Microsoft OneDrive"
---

{{< icon "fab fa-windows" >}} Microsoft OneDrive
-----------------------------------------
# {{< icon "fab fa-windows" >}} Microsoft OneDrive

Paths are specified as `remote:path`

@@ -277,7 +276,7 @@ Note that the chunks will be buffered into memory.
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 10M
- Default: 10Mi

#### --onedrive-drive-id

@@ -3,8 +3,7 @@ title: "OpenDrive"
description: "Rclone docs for OpenDrive"
---

{{< icon "fa fa-file" >}} OpenDrive
------------------------------------
# {{< icon "fa fa-file" >}} OpenDrive

Paths are specified as `remote:path`

@@ -148,7 +147,7 @@ increase memory use.
- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 10M
- Default: 10Mi

{{< rem autogenerated options stop >}}

@@ -3,8 +3,7 @@ title: "pCloud"
description: "Rclone docs for pCloud"
---

{{< icon "fa fa-cloud" >}} pCloud
-----------------------------------------
# {{< icon "fa fa-cloud" >}} pCloud

Paths are specified as `remote:path`

@@ -3,8 +3,7 @@ title: "premiumize.me"
description: "Rclone docs for premiumize.me"
---

{{< icon "fa fa-user" >}} premiumize.me
-----------------------------------------
# {{< icon "fa fa-user" >}} premiumize.me

Paths are specified as `remote:path`

@@ -3,8 +3,7 @@ title: "put.io"
description: "Rclone docs for put.io"
---

{{< icon "fas fa-parking" >}} put.io
---------------------------------
# {{< icon "fas fa-parking" >}} put.io

Paths are specified as `remote:path`

@@ -3,8 +3,7 @@ title: "QingStor"
description: "Rclone docs for QingStor Object Storage"
---

{{< icon "fas fa-hdd" >}} QingStor
---------------------------------------
# {{< icon "fas fa-hdd" >}} QingStor

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

@@ -232,7 +231,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M
- Default: 200Mi

#### --qingstor-chunk-size

@@ -250,7 +249,7 @@ enough memory, then increasing this will speed up the transfers.
- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M
- Default: 4Mi

#### --qingstor-upload-concurrency

@@ -525,8 +525,14 @@ This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- type - type of the new remote
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured
- opt - a dictionary of options to control the configuration
    - obscure - declare passwords are plain and need obscuring
    - noObscure - declare passwords are already obscured and don't need obscuring
    - nonInteractive - don't interact with a user, return questions
    - continue - continue the config process with an answer
    - all - ask all the config questions not just the post config ones
    - state - state to restart with - used with continue
    - result - result to restart with - used with continue


See the [config create command](/commands/rclone_config_create/) command for more information on the above.

@@ -600,8 +606,14 @@ This takes the following parameters

- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- obscure - optional bool - forces obscuring of passwords
- noObscure - optional bool - forces passwords not to be obscured
- opt - a dictionary of options to control the configuration
    - obscure - declare passwords are plain and need obscuring
    - noObscure - declare passwords are already obscured and don't need obscuring
    - nonInteractive - don't interact with a user, return questions
    - continue - continue the config process with an answer
    - all - ask all the config questions not just the post config ones
    - state - state to restart with - used with continue
    - result - result to restart with - used with continue


See the [config update command](/commands/rclone_config_update/) command for more information on the above.

@@ -775,7 +787,7 @@ Returns the following values:
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"speed": average speed in bytes/sec since start of the group,
"speed": average speed in bytes per second since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
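As an illustration of the parameters above, a non-interactive remote creation over the remote control might look like this, assuming an rclone instance is already running with the RC enabled; the remote name and parameter values are placeholders:

```
rclone rc config/create --json '{
    "name": "mydrive",
    "type": "drive",
    "parameters": { "scope": "drive" },
    "opt": { "nonInteractive": true }
}'
```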
@@ -3,8 +3,7 @@ title: "Amazon S3"
description: "Rclone docs for Amazon S3"
---

{{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers
--------------------------------------------------------
# {{< icon "fab fa-amazon" >}} Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers:

@@ -894,6 +893,10 @@ Endpoint for OSS API.
- Type: string
- Default: ""
- Examples:
    - "oss-accelerate.aliyuncs.com"
        - Global Accelerate
    - "oss-accelerate-overseas.aliyuncs.com"
        - Global Accelerate (outside mainland China)
    - "oss-cn-hangzhou.aliyuncs.com"
        - East China 1 (Hangzhou)
    - "oss-cn-shanghai.aliyuncs.com"

@@ -905,9 +908,17 @@ Endpoint for OSS API.
    - "oss-cn-zhangjiakou.aliyuncs.com"
        - North China 3 (Zhangjiakou)
    - "oss-cn-huhehaote.aliyuncs.com"
        - North China 5 (Huhehaote)
        - North China 5 (Hohhot)
    - "oss-cn-wulanchabu.aliyuncs.com"
        - North China 6 (Ulanqab)
    - "oss-cn-shenzhen.aliyuncs.com"
        - South China 1 (Shenzhen)
    - "oss-cn-heyuan.aliyuncs.com"
        - South China 2 (Heyuan)
    - "oss-cn-guangzhou.aliyuncs.com"
        - South China 3 (Guangzhou)
    - "oss-cn-chengdu.aliyuncs.com"
        - West China 1 (Chengdu)
    - "oss-cn-hongkong.aliyuncs.com"
        - Hong Kong (Hong Kong)
    - "oss-us-west-1.aliyuncs.com"

@@ -1029,6 +1040,8 @@ Required when using an S3 clone.
        - Digital Ocean Spaces Amsterdam 3
    - "sgp1.digitaloceanspaces.com"
        - Digital Ocean Spaces Singapore 1
    - "localhost:8333"
        - SeaweedFS S3 localhost
    - "s3.wasabisys.com"
        - Wasabi US East endpoint
    - "s3.us-west-1.wasabisys.com"

@@ -1334,7 +1347,7 @@

### Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS).
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).

#### --s3-bucket-acl

@@ -1420,7 +1433,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M
- Default: 200Mi

#### --s3-chunk-size

@@ -1449,7 +1462,7 @@ larger files then you will need to increase chunk_size.
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
- Default: 5Mi

#### --s3-max-upload-parts

@@ -1482,7 +1495,7 @@ The minimum is 0 and the maximum is 5 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4.656G
- Default: 4.656Gi

#### --s3-disable-checksum

@@ -1684,6 +1697,15 @@ very small even with this flag.
- Type: bool
- Default: false

#### --s3-no-head-object

If set, don't HEAD objects

- Config: no_head_object
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
- Type: bool
- Default: false

#### --s3-encoding

This sets the encoding for the backend.
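For example, the new option could be enabled on a copy from a bucket like this; the bucket name, prefix and destination are placeholders:

```
rclone copy s3:source-bucket/prefix /tmp/data --s3-no-head-object
```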
@@ -1884,7 +1906,7 @@ Then use it as normal with the name of the public bucket, e.g.

You will be able to list and copy data but not upload it.

### Ceph ###
## Ceph

[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and

@@ -1940,7 +1962,7 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.

### Dreamhost ###
## Dreamhost

Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.

@@ -1964,7 +1986,7 @@ server_side_encryption =
storage_class =
```

### DigitalOcean Spaces ###
## DigitalOcean Spaces

[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.

@@ -2010,7 +2032,7 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```

### IBM COS (S3) ###
## IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)

@@ -2182,7 +2204,7 @@ acl> 1
rclone delete IBM-COS-XREGION:newbucket/file.txt
```

### Minio ###
## Minio

[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.

@@ -2249,7 +2271,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
```

### Scaleway {#scaleway}
## Scaleway

[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos.
Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.

@@ -2271,7 +2293,7 @@ server_side_encryption =
storage_class =
```

### SeaweedFS ###
## SeaweedFS

[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.

@@ -2321,7 +2343,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
```

### Wasabi ###
## Wasabi

[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for

@@ -2434,7 +2456,7 @@ server_side_encryption =
storage_class =
```

### Alibaba OSS {#alibaba-oss}
## Alibaba OSS {#alibaba-oss}

Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:

@@ -2544,7 +2566,7 @@ d) Delete this remote
y/e/d> y
```

### Tencent COS {#tencent-cos}
## Tencent COS {#tencent-cos}

[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.

@@ -2676,13 +2698,13 @@ Name Type
cos s3
```

### Netease NOS
## Netease NOS

For Netease NOS configure as per the configurator `rclone config`
setting the provider `Netease`. This will automatically set
`force_path_style = false` which is necessary for it to run properly.

### Limitations
## Limitations

`rclone about` is not supported by the S3 backend. Backends without
this capability cannot determine free space for an rclone mount or
@@ -3,8 +3,7 @@ title: "Seafile"
description: "Seafile"
---

{{< icon "fa fa-server" >}}Seafile
----------------------------------------
# {{< icon "fa fa-server" >}}Seafile

This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- It works with both the free community edition or the professional edition.
@ -3,8 +3,7 @@ title: "SFTP"
description: "SFTP"
---

{{< icon "fa fa-server" >}} SFTP
----------------------------------------
# {{< icon "fa fa-server" >}} SFTP

SFTP is the [Secure (or SSH) File Transfer
Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).

@ -531,6 +530,21 @@ If concurrent reads are disabled, the use_fstat option is ignored.

- Type: bool
- Default: false

#### --sftp-disable-concurrent-writes

If set don't use concurrent writes

Normally rclone uses concurrent writes to upload files. This improves
the performance greatly, especially for distant servers.

This option disables concurrent writes should that be necessary.

- Config: disable_concurrent_writes
- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
- Type: bool
- Default: false
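
As a usage sketch (the remote name and paths are placeholders), the option can be supplied on the command line or through the environment variable listed above:

```
rclone copy /path/to/files remote:backup --sftp-disable-concurrent-writes
# or, equivalently, via the environment variable
RCLONE_SFTP_DISABLE_CONCURRENT_WRITES=true rclone copy /path/to/files remote:backup
```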

#### --sftp-idle-timeout

Max time before closing idle connections

@ -3,7 +3,7 @@ title: "Citrix ShareFile"
description: "Rclone docs for Citrix ShareFile"
---

## {{< icon "fas fa-share-square" >}} Citrix ShareFile
# {{< icon "fas fa-share-square" >}} Citrix ShareFile

[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at business.

@ -191,7 +191,7 @@ Cutoff for switching to multipart upload.

- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 128M
- Default: 128Mi

#### --sharefile-chunk-size

@ -205,7 +205,7 @@ Reducing this will reduce memory usage but decrease performance.

- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64M
- Default: 64Mi
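
If the defaults need adjusting, for example to reduce memory usage, a sketch of overriding them might look like this (paths and values are purely illustrative):

```
rclone copy /path/to/files sharefile:backup \
    --sharefile-chunk-size 32Mi --sharefile-upload-cutoff 64Mi
```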

#### --sharefile-endpoint

@ -3,8 +3,7 @@ title: "SugarSync"
description: "Rclone docs for SugarSync"
---

{{< icon "fas fa-dove" >}} SugarSync
-----------------------------------------
# {{< icon "fas fa-dove" >}} SugarSync

[SugarSync](https://sugarsync.com) is a cloud service that enables
active synchronization of files across computers and other devices for

@ -3,8 +3,7 @@ title: "Swift"
description: "Swift"
---

{{< icon "fa fa-space-shuttle" >}}Swift
----------------------------------------
# {{< icon "fa fa-space-shuttle" >}}Swift

Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
Commercial implementations of that being:

@ -449,7 +448,7 @@ default for this is 5 GiB which is its maximum value.

- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5G
- Default: 5Gi
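
A sketch of overriding this through the environment variable listed above (the value and remote name are only examples):

```
# reduce the segment size to 1 GiB for this invocation
export RCLONE_SWIFT_CHUNK_SIZE=1Gi
rclone copy /path/to/files swift:container
```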

#### --swift-no-chunk

@ -3,8 +3,7 @@ title: "Tardigrade"
description: "Rclone docs for Tardigrade"
---

{{< icon "fas fa-dove" >}} Tardigrade
-----------------------------------------
# {{< icon "fas fa-dove" >}} Tardigrade

[Tardigrade](https://tardigrade.io) is an encrypted, secure, and
cost-effective object storage service that enables you to store, back up, and

@ -3,8 +3,7 @@ title: "Union"
description: "Remote Unification"
---

{{< icon "fa fa-link" >}} Union
-----------------------------------------
# {{< icon "fa fa-link" >}} Union

The `union` remote provides a unification similar to UnionFS using other remotes.

@ -3,8 +3,7 @@ title: "Uptobox"
description: "Rclone docs for Uptobox"
---

{{< icon "fa fa-archive" >}} Uptobox
-----------------------------------------
# {{< icon "fa fa-archive" >}} Uptobox

This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional
cloud storage provider and therefore not suitable for long term storage.

@ -13,9 +12,9 @@ Paths are specified as `remote:path`

Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

## Setup
### Setup

To configure an Uptobox backend you'll need your personal api token. You'll find it in you
To configure an Uptobox backend you'll need your personal api token. You'll find it in your
[account settings](https://uptobox.com/my_account)

@ -107,12 +106,12 @@ as they can't be used in XML strings.

Here are the standard options specific to uptobox (Uptobox).

#### --uptobox-api-key
#### --uptobox-access-token

Your API Key, get it from https://uptobox.com/my_account
Your access Token, get it from https://uptobox.com/my_account

- Config: api_key
- Env Var: RCLONE_UPTOBOX_API_KEY
- Config: access_token
- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Default: ""
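
For orientation, a minimal config-file sketch using this option might look as follows (the remote name and token are placeholders):

```
# placeholder token; keep it secret
[uptobox]
type = uptobox
access_token = YOUR_UPTOBOX_TOKEN
```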

@ -129,7 +128,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.

- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot

{{< rem autogenerated options stop >}}

@ -138,4 +137,4 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.

Uptobox will delete inactive files that have not been accessed in 60 days.

`rclone about` is not supported by this backend; an overview of used space can however
be seen in the uptobox web interface.
be seen in the uptobox web interface.

@ -3,8 +3,7 @@ title: "WebDAV"
description: "Rclone docs for WebDAV"
---

{{< icon "fa fa-globe" >}} WebDAV
-----------------------------------------
# {{< icon "fa fa-globe" >}} WebDAV

Paths are specified as `remote:path`

@ -3,8 +3,7 @@ title: "Yandex"
description: "Yandex Disk"
---

{{< icon "fa fa-space-shuttle" >}}Yandex Disk
----------------------------------------
# {{< icon "fa fa-space-shuttle" >}}Yandex Disk

[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).

@ -3,8 +3,7 @@ title: "Zoho"
description: "Zoho WorkDrive"
---

{{< icon "fas fa-folder" >}}Zoho Workdrive
----------------------------------------
# {{< icon "fas fa-folder" >}}Zoho Workdrive

[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution created by [Zoho](https://zoho.com).

@ -150,7 +149,11 @@ Leave blank normally.

#### --zoho-region

Zoho region to connect to. You'll have to use the region you organization is registered in.
Zoho region to connect to.

You'll have to use the region your organization is registered in. If
not sure use the same top level domain as you connect to in your
browser.

- Config: region
- Env Var: RCLONE_ZOHO_REGION
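
A sketch of supplying the region at run time (the value shown is only an example; use the region matching the top level domain you use in your browser, e.g. zoho.eu):

```
rclone lsd myzoho: --zoho-region eu
# or via the environment variable
RCLONE_ZOHO_REGION=eu rclone lsd myzoho:
```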