Mirror of https://github.com/rclone/rclone.git (synced 2024-11-22 13:26:11 +08:00)

Version v1.60.0

This commit is contained in:
parent afa61e702c
commit 01dbbff62e
MANUAL.html (generated, 2343): file diff suppressed because it is too large.

MANUAL.txt (generated, 2676): file diff suppressed because it is too large.
@@ -5,6 +5,82 @@ description: "Rclone Changelog"
# Changelog

## v1.60.0 - 2022-10-21

[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.60.0)

* New backends
    * [Oracle object storage](/oracleobjectstorage/) (Manoj Ghosh)
    * [SMB](/smb/) / CIFS (Windows file sharing) (Lesmiscore)
    * New S3 providers
        * [IONOS Cloud Storage](/s3/#ionos) (Dmitry Deniskin)
        * [Qiniu KODO](/s3/#qiniu) (Bachue Zhou)
* New Features
    * build
        * Update to go1.19 and make go1.17 the minimum required version (Nick Craig-Wood)
        * Install.sh: fix arm-v7 download (Ole Frost)
    * fs: Warn the user when using an existing remote name without a colon (Nick Craig-Wood)
    * httplib: Add `--xxx-min-tls-version` option to select minimum TLS version for HTTP servers (Robert Newson)
    * librclone: Add PHP bindings and test program (Jordi Gonzalez Muñoz)
    * operations
        * Add `--server-side-across-configs` global flag for any backend (Nick Craig-Wood)
        * Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
    * rc: add `job/stopgroup` to stop group (Evan Spensley)
    * serve dlna
        * Add `--announce-interval` to control SSDP Announce Interval (YanceyChiew)
        * Add `--interface` to specify SSDP interface names (Simon Bos)
        * Add support for more external subtitles (YanceyChiew)
        * Add verification of addresses (YanceyChiew)
    * sync: Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
    * doc updates (albertony, Alexander Knorr, anonion, João Henrique Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley, Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
* Bug Fixes
    * filter
        * Fix incorrect filtering with `UseFilter` context flag and wrapping backends (Nick Craig-Wood)
        * Make sure we check `--files-from` when looking for a single file (Nick Craig-Wood)
    * rc
        * Fix `mount/listmounts` not returning the full Fs entered in `mount/mount` (Tom Mombourquette)
        * Handle external unmount when mounting (Isaac Aymerich)
        * Validate Daemon option is not set when mounting a volume via RC (Isaac Aymerich)
    * sync: Update docs and error messages to reflect fixes to overlap checks (Nick Naumann)
* VFS
    * Reduce memory use by embedding `sync.Cond` (Nick Craig-Wood)
    * Reduce memory usage by re-ordering commonly used structures (Nick Craig-Wood)
    * Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)
* Local
    * Obey file filters in listing to fix errors on excluded files (Nick Craig-Wood)
    * Fix "Failed to read metadata: function not implemented" on old Linux kernels (Nick Craig-Wood)
* Compress
    * Fix crash due to nil metadata (Nick Craig-Wood)
    * Fix error handling to not use or return nil objects (Nick Craig-Wood)
* Drive
    * Make `--drive-stop-on-upload-limit` obey quota exceeded error (Steve Kowalik)
* FTP
    * Add `--ftp-force-list-hidden` option to show hidden items (Øyvind Heddeland Instefjord)
    * Fix hang when using ExplicitTLS to certain servers (Nick Craig-Wood)
* Google Cloud Storage
    * Add `--gcs-endpoint` flag and config parameter (Nick Craig-Wood)
* Hubic
    * Remove backend as service has now shut down (Nick Craig-Wood)
* Onedrive
    * Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
    * Disable change notify in China region since it is not supported (Nick Craig-Wood)
* S3
    * Implement `--s3-versions` flag to show old versions of objects if enabled (Nick Craig-Wood)
    * Implement `--s3-version-at` flag to show versions of objects at a particular time (Nick Craig-Wood)
    * Implement `backend versioning` command to get/set bucket versioning (Nick Craig-Wood)
    * Implement `Purge` to purge versions and `backend cleanup-hidden` (Nick Craig-Wood)
    * Add `--s3-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood)
    * Add `--s3-sse-customer-key-base64` to supply keys with binary data (Richard Bateman)
    * Try to keep the maximum precision in ModTime with `--use-server-modtime` (Nick Craig-Wood)
    * Drop binary metadata with an ERROR message as it can't be stored (Nick Craig-Wood)
    * Add `--s3-no-system-metadata` to suppress read and write of system metadata (Nick Craig-Wood)
* SFTP
    * Fix directory creation races (Lesmiscore)
* Swift
    * Add `--swift-no-large-objects` to reduce HEAD requests (Nick Craig-Wood)
* Union
    * Propagate SlowHash feature to fix hasher interaction (Lesmiscore)

## v1.59.2 - 2022-09-15

[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
@@ -37,7 +37,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectonal synchronization between two paths.
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
@@ -1,17 +1,17 @@
---
title: "rclone bisync"
description: "Perform bidirectonal synchronization between two paths."
description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync
url: /commands/rclone_bisync/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs"
---
# rclone bisync

Perform bidirectonal synchronization between two paths.
Perform bidirectional synchronization between two paths.

## Synopsis

Perform bidirectonal synchronization between two paths.
Perform bidirectional synchronization between two paths.

[Bisync](https://rclone.org/bisync/) provides a
bidirectional cloud sync solution in rclone.
@@ -28,7 +28,7 @@ To load completions for every new session, execute once:

### macOS:

    rclone completion bash > /usr/local/etc/bash_completion.d/rclone
    rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone

You will need to start a new shell for this setup to take effect.
@@ -18,6 +18,10 @@ to enable it. You can execute the following once:

    echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions in your current shell session:

    source <(rclone completion zsh); compdef _rclone rclone

To load completions for every new session, execute once:

### Linux:

@@ -26,7 +30,7 @@ To load completions for every new session, execute once:

### macOS:

    rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
    rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone

You will need to start a new shell for this setup to take effect.
@@ -45,7 +45,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.

The flag `--non-interactive` is for use by applications that wish to
configure rclone themeselves, rather than using rclone's text based
configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
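
For instance, a minimal sketch (the remote name and backend parameters here are placeholders, not taken from the manual):

    rclone config create mydrive alias remote=/tmp/source --non-interactive

If rclone needs more information, it replies with the JSON question blob described above instead of prompting on the terminal.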

@@ -45,7 +45,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.

The flag `--non-interactive` is for use by applications that wish to
configure rclone themeselves, rather than using rclone's text based
configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
@@ -26,7 +26,7 @@ For the MD5 and SHA1 algorithms there are also dedicated commands,

This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hypen will be treated literaly,
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).

Run without a hash to see the list of all supported hashes, e.g.
@@ -42,7 +42,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.

Listing a non-existent directory will produce an error except for
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).

@@ -52,7 +52,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.

Listing a non-existent directory will produce an error except for
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).

@@ -126,7 +126,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.

Listing a non-existent directory will produce an error except for
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -56,7 +56,7 @@ If `--files-only` is not specified directories in addition to the files
will be returned.

If `--metadata` is set then an additional Metadata key will be returned.
This will have metdata in rclone standard format as a JSON object.
This will have metadata in rclone standard format as a JSON object.

if `--stat` is set then a single JSON blob will be returned about the
item pointed to. This will return an error if the item isn't found.
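
As a quick illustrative sketch (the remote path is a placeholder):

    rclone lsjson --stat --metadata remote:path/to/file.txt

This returns a single JSON blob for that item, including a Metadata object when the backend provides one.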

@@ -102,7 +102,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.

Listing a non-existent directory will produce an error except for
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).

@@ -42,7 +42,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re

The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.

Listing a non-existent directory will produce an error except for
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -26,7 +26,7 @@ to running `rclone hashsum MD5 remote:path`.

This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hypen will be treated literaly,
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
@@ -98,7 +98,7 @@ and experience unexpected program errors, freezes or other issues, consider moun
as a network drive instead.

When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path representing a **non-existent** subdirectory of an **existing** parent
or to a path representing a **nonexistent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:

@@ -129,7 +129,7 @@ the mapped drive, shown in Windows Explorer etc, while the complete
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.

If you specify a full network share UNC path with `--volname`, this will implicitely
If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have same result:

    rclone mount remote:path/to/files X: --network-mode

@@ -138,7 +138,7 @@ set the `--network-mode` option, so the following two examples have same result:
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the `--volname` option. This will also implicitely set
specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have same result:

    rclone mount remote:path/to/files \\cloud\remote

@@ -174,7 +174,7 @@ The permissions on each entry will be set according to [options](#options)

The default permissions corresponds to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
to start any programs from the the mount. To be able to do that you must add
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
to everyone. If the program needs to write files, chances are you will have
to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
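
For example, a hedged sketch that adds execute permissions and enables write caching (the drive letter and remote are placeholders):

    rclone mount remote:path/to/files X: --file-perms 0777 --dir-perms 0777 --vfs-cache-mode writes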

@@ -245,8 +245,8 @@ applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.

The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
@@ -341,6 +341,8 @@ mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/p
or create systemd mount units:
```
# /etc/systemd/system/mnt-data.mount
[Unit]
After=network-online.target
[Mount]
Type=rclone
What=sftp1:subdir

@@ -352,6 +354,7 @@ optionally accompanied by systemd automount unit
```
# /etc/systemd/system/mnt-data.automount
[Unit]
After=network-online.target
Before=remote-fs.target
[Automount]
Where=/mnt/data
@@ -45,7 +45,7 @@ press '?' to toggle the help on and off. The supported keys are:
 q/ESC/^c to quit

Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackes at end of line.
some of them combined with a description in brackets at end of line.
These flags have the following meaning:

    e means this is an empty directory, i.e. contains no files (but
@@ -32,11 +32,6 @@ IPs.
Use `--name` to choose the friendly server name, which is by
default "rclone (hostname)".

Use `--announce-interval` to specify the interval at which the SSDP
server announces devices and services. Larger active announcement
intervals help keep the multicast domain clean; this value does not
affect unicast responses to `M-SEARCH` requests from other devices.

Use `--log-trace` in conjunction with `-vv` to enable additional debug
logging of all UPNP traffic.
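
For example, a hedged sketch combining these options (the remote and the interface name are placeholders):

    rclone serve dlna remote:media --name "rclone DLNA" --announce-interval 30m --interface eth0 -vv --log-trace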
@@ -367,11 +362,13 @@ rclone serve dlna remote:path [flags]

```
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
--announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
--interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -60,6 +60,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.

`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").

### Template

`--template` allows a user to specify a custom markup template for HTTP

@@ -446,6 +450,7 @@ rclone serve http remote:path [flags]
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -174,6 +174,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.

`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").

```
rclone serve restic remote:path [flags]

@@ -192,6 +196,7 @@ rclone serve restic remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
@@ -11,11 +11,19 @@ Serve the remote over SFTP.

## Synopsis

Run a SFTP server to serve a remote over SFTP. This can be used
with an SFTP client or you can make a remote of type sftp to use with it.
Run an SFTP server to serve a remote over SFTP. This can be used
with an SFTP client or you can make a remote of type [sftp](/sftp) to use with it.

You can use the filter flags (e.g. `--include`, `--exclude`) to control what
is served.
You can use the [filter](/filtering) flags (e.g. `--include`, `--exclude`)
to control what is served.

The server will respond to a small number of shell commands, mainly
md5sum, sha1sum and df, which enable it to provide support for checksums
and the about feature when accessed from an sftp remote.

Note that this server uses standard 32 KiB packet payload size, which
means you must not configure the client to expect anything else, e.g.
with the [chunk_size](/sftp/#sftp-chunk-size) option on an sftp remote.

The server will log errors. Use `-v` to see access logs.
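
As a brief sketch (the remote name and credentials are placeholders):

    rclone serve sftp remote:backup --user demo --pass secret -v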

@@ -28,11 +36,6 @@ You must provide some means of authentication, either with
`--auth-proxy`, or set the `--no-auth` flag for no
authentication when logging in.

Note that this also implements a small number of shell commands so
that it can provide md5sum/sha1sum/df information for the rclone sftp
backend. This means that is can support SHA1SUMs, MD5SUMs and the
about command when paired with the rclone sftp backend.

If you don't supply a host `--key` then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see `rclone help flags cache-dir`) in the "serve-sftp"

@@ -484,7 +487,7 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
--stdio Run an sftp server on run stdin/stdout
--stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
@@ -109,6 +109,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.

`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").

## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects

@@ -531,6 +535,7 @@ rclone serve webdav remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -26,7 +26,7 @@ to running `rclone hashsum SHA1 remote:path`.

This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hypen will be treated literaly,
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).

This command can also hash data received on STDIN, if not passing
@@ -37,6 +37,11 @@ extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.

It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
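
As an illustrative sketch of the filter-rule approach (the remote and paths are placeholders), syncing a directory into a subdirectory of itself while excluding that subdirectory:

    rclone sync remote:data remote:data/backup --exclude "/backup/**"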

**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics

**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
@@ -119,7 +119,7 @@ These flags are available for every command.
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-min-tls-version string Minimum TLS version that is acceptable
--rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")

@@ -136,6 +136,7 @@ These flags are available for every command.
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)

@@ -161,7 +162,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.60.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -348,6 +349,7 @@ and may be set in the config file.
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server

@@ -358,7 +360,6 @@ and may be set in the config file.
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD.
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets

@@ -367,6 +368,7 @@ and may be set in the config file.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects

@@ -412,14 +414,6 @@ and may be set in the config file.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
--hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -535,6 +529,7 @@ and may be set in the config file.
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads

@@ -553,6 +548,7 @@ and may be set in the config file.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to

@@ -562,7 +558,8 @@ and may be set in the config file.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3

@@ -572,6 +569,8 @@ and may be set in the config file.
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)

@@ -618,6 +617,15 @@ and may be set in the config file.
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
--smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase

@@ -648,6 +656,7 @@ and may be set in the config file.
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
@@ -248,6 +248,20 @@ Here are the Advanced options specific to ftp (FTP).

Maximum number of FTP simultaneous connections, 0 for unlimited.

Note that setting this is very likely to cause deadlocks so it should
be used with care.

If you are doing a sync or copy then make sure concurrency is one more
than the sum of `--transfers` and `--checkers`.

If you use `--check-first` then it just needs to be one more than the
maximum of `--checkers` and `--transfers`.

So for `concurrency 3` you'd use `--checkers 2 --transfers 2
--check-first` or `--checkers 1 --transfers 1`.
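
A worked sketch (the remote and paths are placeholders): with `--transfers 2 --checkers 2` and no `--check-first`, concurrency needs to be at least 2 + 2 + 1 = 5:

    rclone copy ftpremote:src /local/dst --ftp-concurrency 5 --transfers 2 --checkers 2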

Properties:

- Config: concurrency
@@ -621,6 +621,19 @@ Properties:
- Type: bool
- Default: false

#### --gcs-endpoint

Endpoint for the service.

Leave blank normally.

Properties:

- Config: endpoint
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false

#### --gcs-encoding

The encoding for the backend.
@@ -982,6 +982,12 @@ Parameters:

- jobid - id of the job (integer).

### job/stopgroup: Stop all running jobs in a group {#job-stopgroup}

Parameters:

- group - name of the group (string).
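
For example (the group name is a placeholder):

    rclone rc job/stopgroup group=myGroup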

### mount/listmounts: Show current mount points {#mount-listmounts}

This shows currently mounted points, which can be used for performing an unmount.

@@ -1057,9 +1063,11 @@ Example:

**Authentication is required for this call.**

### mount/unmountall: Show current mount points {#mount-unmountall}
### mount/unmountall: Unmount all active mounts {#mount-unmountall}

This shows currently mounted points, which can be used for performing an unmount.
rclone allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.

This takes no parameters and returns an error if unmount does not succeed.
@@ -641,7 +641,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).

#### --s3-provider

@@ -676,6 +676,8 @@ Properties:
        - IBM COS S3
    - "IDrive"
        - IDrive e2
    - "IONOS"
        - IONOS Cloud
    - "LyveCloud"
        - Seagate Lyve Cloud
    - "Minio"

@@ -696,6 +698,8 @@ Properties:
        - Tencent Cloud Object Storage (COS)
    - "Wasabi"
        - Wasabi Object Storage
    - "Qiniu"
        - Qiniu Object Storage (Kodo)
    - "Other"
        - Any other S3 compatible provider
@@ -966,13 +970,68 @@ Properties:

Region to connect to.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: Qiniu
- Type: string
- Required: false
- Examples:
    - "cn-east-1"
        - The default endpoint - a good choice if you are unsure.
        - East China Region 1.
        - Needs location constraint cn-east-1.
    - "cn-east-2"
        - East China Region 2.
        - Needs location constraint cn-east-2.
    - "cn-north-1"
        - North China Region 1.
        - Needs location constraint cn-north-1.
    - "cn-south-1"
        - South China Region 1.
        - Needs location constraint cn-south-1.
    - "us-north-1"
        - North America Region.
        - Needs location constraint us-north-1.
    - "ap-southeast-1"
        - Southeast Asia Region 1.
        - Needs location constraint ap-southeast-1.
    - "ap-northeast-1"
        - Northeast Asia Region 1.
        - Needs location constraint ap-northeast-1.

#### --s3-region

Region where your bucket will be created and your data stored.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: IONOS
- Type: string
- Required: false
- Examples:
    - "de"
        - Frankfurt, Germany
    - "eu-central-2"
        - Berlin, Germany
    - "eu-south-2"
        - Logrono, Spain

#### --s3-region

Region to connect to.

Leave blank if you are using an S3 clone and you don't have a region.

Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -1230,6 +1289,27 @@ Properties:

#### --s3-endpoint

Endpoint for IONOS S3 Object Storage.

Specify the endpoint from the same region.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: IONOS
- Type: string
- Required: false
- Examples:
    - "s3-eu-central-1.ionoscloud.com"
        - Frankfurt, Germany
    - "s3-eu-central-2.ionoscloud.com"
        - Berlin, Germany
    - "s3-eu-south-2.ionoscloud.com"
        - Logrono, Spain

#### --s3-endpoint

Endpoint for OSS API.

Properties:

@@ -1495,6 +1575,33 @@ Properties:

#### --s3-endpoint

Endpoint for Qiniu Object Storage.

Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: Qiniu
- Type: string
- Required: false
- Examples:
    - "s3-cn-east-1.qiniucs.com"
        - East China Endpoint 1
    - "s3-cn-east-2.qiniucs.com"
        - East China Endpoint 2
    - "s3-cn-north-1.qiniucs.com"
        - North China Endpoint 1
    - "s3-cn-south-1.qiniucs.com"
        - South China Endpoint 1
    - "s3-us-north-1.qiniucs.com"
        - North America Endpoint 1
    - "s3-ap-southeast-1.qiniucs.com"
        - Southeast Asia Endpoint 1
    - "s3-ap-northeast-1.qiniucs.com"
        - Northeast Asia Endpoint 1

#### --s3-endpoint

Endpoint for S3 API.

Required when using an S3 clone.

@@ -1503,7 +1610,7 @@ Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
- Provider: !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
- Type: string
- Required: false
- Examples:
@@ -1830,13 +1937,42 @@ Properties:

Location constraint - must be set to match the Region.

Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: Qiniu
- Type: string
- Required: false
- Examples:
    - "cn-east-1"
        - East China Region 1
    - "cn-east-2"
        - East China Region 2
    - "cn-north-1"
        - North China Region 1
    - "cn-south-1"
        - South China Region 1
    - "us-north-1"
        - North America Region 1
    - "ap-southeast-1"
        - Southeast Asia Region 1
    - "ap-northeast-1"
        - Northeast Asia Region 1

#### --s3-location-constraint

Location constraint - must be set to match the Region.

Leave blank if not sure. Used when creating buckets only.

Properties:

- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Provider: !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@@ -2066,9 +2202,30 @@ Properties:
        - Archived storage.
        - Prices are lower, but it needs to be restored first to be accessed.

#### --s3-storage-class

The storage class to use when storing new objects in Qiniu.

Properties:

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: Qiniu
- Type: string
- Required: false
- Examples:
    - "STANDARD"
        - Standard storage class
    - "LINE"
        - Infrequent access storage mode
    - "GLACIER"
        - Archive storage mode
    - "DEEP_ARCHIVE"
        - Deep archive storage mode

### Advanced options

Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).

#### --s3-bucket-acl

@@ -2131,7 +2288,9 @@ Properties:

#### --s3-sse-customer-key

If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.

Alternatively you can provide --sse-customer-key-base64.

Properties:
@@ -2144,6 +2303,23 @@ Properties:
    - ""
        - None

#### --s3-sse-customer-key-base64

If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.

Alternatively you can provide --sse-customer-key.

Properties:

- Config: sse_customer_key_base64
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
    - ""
        - None

#### --s3-sse-customer-key-md5

If using SSE-C you may provide the secret encryption key MD5 checksum (optional).

@@ -2663,6 +2839,36 @@ Properties:
- Type: Time
- Default: off

#### --s3-decompress

If set this will decompress gzip encoded objects.

It is possible to upload objects to S3 with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.

If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.

Properties:

- Config: decompress
- Env Var: RCLONE_S3_DECOMPRESS
- Type: bool
- Default: false
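
As an illustrative sketch (the remote, bucket and paths are placeholders), downloading gzip-encoded objects decompressed on the fly:

    rclone copy s3remote:mybucket/logs /tmp/logs --s3-decompress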

#### --s3-no-system-metadata

Suppress setting and reading of system metadata

Properties:

- Config: no_system_metadata
- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
- Type: bool
- Default: false

### Metadata

User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@@ -789,19 +789,24 @@ Properties:

Upload and download chunk size.

This controls the maximum packet size used in the SFTP protocol. The
RFC limits this to 32768 bytes (32k), however a lot of servers
support larger sizes and setting it larger will increase transfer
speed dramatically on high latency links.
This controls the maximum size of payload in SFTP protocol packets.
The RFC limits this to 32768 bytes (32k), which is the default. However,
a lot of servers support larger sizes, typically limited to a maximum
total package size of 256k, and setting it larger will increase transfer
speed dramatically on high latency links. This includes OpenSSH, and,
for example, using the value of 255k works well, leaving plenty of room
for overhead while still being within a total packet size of 256k.

Only use a setting higher than 32k if you always connect to the same
server or after sufficiently broad testing.

For example using the value of 252k with OpenSSH works well with its
maximum packet size of 256k.

If you get the error "failed to send packet header: EOF" when copying
a large file, try lowering this number.
Make sure to test thoroughly before using a value higher than 32k,
and only use it if you always connect to the same server or after
sufficiently broad testing. If you get errors such as
"failed to send packet payload: EOF", lots of "connection lost",
or "corrupted on transfer", when copying a larger file, try lowering
the value. The server run by [rclone serve sftp](/commands/rclone_serve_sftp)
sends packets with standard 32k maximum payload so you must not
set a different chunk_size when downloading files, but it accepts
packets up to the 256k total size, so for uploads the chunk_size
can be set as for the OpenSSH example above.
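
For instance, a hedged sketch raising the payload size when uploading to an OpenSSH server (the remote and paths are placeholders):

    rclone copy /local/data sftpremote:data --sftp-chunk-size 255k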

Properties:
@@ -534,6 +534,38 @@ Properties:
- Type: bool
- Default: false

#### --swift-no-large-objects

Disable support for static and dynamic large objects

Swift cannot transparently store files bigger than 5 GiB. There are
two schemes for doing that, static or dynamic large objects, and the
API does not allow rclone to determine whether a file is a static or
dynamic large object without doing a HEAD on the object. Since these
need to be treated differently, this means rclone has to issue HEAD
requests for objects for example when reading checksums.

When `no_large_objects` is set, rclone will assume that there are no
static or dynamic large objects stored. This means it can stop doing
the extra HEAD calls which in turn increases performance greatly
especially when doing a swift to swift transfer with `--checksum` set.

Setting this option implies `no_chunk` and also that no files will be
uploaded in chunks, so files bigger than 5 GiB will just fail on
upload.

If you set this option and there *are* static or dynamic large objects,
then this will give incorrect hashes for them. Downloads will succeed,
but other operations such as Remove and Copy will fail.
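
For example, a sketch of a swift to swift transfer that benefits from this option (the remote and container names are placeholders):

    rclone sync swiftsrc:container1 swiftdst:container2 --checksum --swift-no-large-objects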

Properties:

- Config: no_large_objects
- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
- Type: bool
- Default: false

#### --swift-encoding

The encoding for the backend.