Mirror of https://github.com/rclone/rclone.git, synced 2024-11-22 08:46:24 +08:00

Version v1.58.0

This commit is contained in:
parent ff1f173fc2
commit f9354fff2f
5267 MANUAL.html (generated): file diff suppressed because it is too large
7550 MANUAL.txt (generated): file diff suppressed because it is too large
@@ -99,9 +99,11 @@ Remote or path to alias.

Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

Properties:

- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
- Default: ""
- Required: true

{{< rem autogenerated options stop >}}
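A minimal sketch of how the `remote` property above can be supplied, assuming an alias remote named `myalias` (the name and the path are illustrative):

```
# rclone.conf entry (config key "remote")
[myalias]
type = alias
remote = /local/path

# equivalent default supplied via the environment variable
export RCLONE_ALIAS_REMOTE=/local/path
```

Either form feeds the same option; the environment variable acts as a default for any alias remote that does not set `remote` explicitly.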
@ -168,10 +168,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_ACD_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --acd-client-secret
|
||||
|
||||
|
@ -179,10 +181,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_ACD_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -192,10 +196,12 @@ Here are the advanced options specific to amazon cloud drive (Amazon Drive).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_ACD_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --acd-auth-url
|
||||
|
||||
|
@ -203,10 +209,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_ACD_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --acd-token-url
|
||||
|
||||
|
@ -214,19 +222,23 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_ACD_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --acd-checkpoint
|
||||
|
||||
Checkpoint for internal polling (debug).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: checkpoint
|
||||
- Env Var: RCLONE_ACD_CHECKPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --acd-upload-wait-per-gb
|
||||
|
||||
|
@ -252,6 +264,8 @@ of big files for a range of file sizes.
|
|||
Upload with the "-v" flag to see more info about what rclone is doing
|
||||
in this situation.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_wait_per_gb
|
||||
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
|
||||
- Type: Duration
|
||||
|
@ -270,6 +284,8 @@ To download files above this threshold, rclone requests a "tempLink"
|
|||
which downloads the file through a temporary URL directly from the
|
||||
underlying S3 storage.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: templink_threshold
|
||||
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
|
||||
- Type: SizeSuffix
|
||||
|
@ -277,10 +293,12 @@ underlying S3 storage.
|
|||
|
||||
#### --acd-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_ACD_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -166,10 +166,12 @@ Storage Account Name.
|
|||
|
||||
Leave blank to use SAS URL or Emulator.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: account
|
||||
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-service-principal-file
|
||||
|
||||
|
@ -185,10 +187,12 @@ Leave blank normally. Needed only if you want to use a service principal instead
|
|||
See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_principal_file
|
||||
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-key
|
||||
|
||||
|
@ -196,10 +200,12 @@ Storage Account Key.
|
|||
|
||||
Leave blank to use SAS URL or Emulator.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key
|
||||
- Env Var: RCLONE_AZUREBLOB_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-sas-url
|
||||
|
||||
|
@ -207,10 +213,12 @@ SAS URL for container level access only.
|
|||
|
||||
Leave blank if using account/key or Emulator.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: sas_url
|
||||
- Env Var: RCLONE_AZUREBLOB_SAS_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-use-msi
|
||||
|
||||
|
@ -225,6 +233,8 @@ the user-assigned identity will be used by default. If the resource has multiple
|
|||
identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
|
||||
msi_client_id, or msi_mi_res_id parameters.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_msi
|
||||
- Env Var: RCLONE_AZUREBLOB_USE_MSI
|
||||
- Type: bool
|
||||
|
@ -236,6 +246,8 @@ Uses local storage emulator if provided as 'true'.
|
|||
|
||||
Leave blank if using real azure storage endpoint.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_emulator
|
||||
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
|
||||
- Type: bool
|
||||
|
@ -251,10 +263,12 @@ Object ID of the user-assigned MSI to use, if any.
|
|||
|
||||
Leave blank if msi_client_id or msi_mi_res_id specified.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: msi_object_id
|
||||
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-msi-client-id
|
||||
|
||||
|
@ -262,10 +276,12 @@ Object ID of the user-assigned MSI to use, if any.
|
|||
|
||||
Leave blank if msi_object_id or msi_mi_res_id specified.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: msi_client_id
|
||||
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-msi-mi-res-id
|
||||
|
||||
|
@ -273,10 +289,12 @@ Azure resource ID of the user-assigned MSI to use, if any.
|
|||
|
||||
Leave blank if msi_client_id or msi_object_id specified.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: msi_mi_res_id
|
||||
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-endpoint
|
||||
|
||||
|
@ -284,32 +302,65 @@ Endpoint for the service.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-upload-cutoff
|
||||
|
||||
Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --azureblob-chunk-size
|
||||
|
||||
Upload chunk size (<= 100 MiB).
|
||||
Upload chunk size.
|
||||
|
||||
Note that this is stored in memory and there may be up to
|
||||
"--transfers" chunks stored at once in memory.
|
||||
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
|
||||
in memory.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
- Default: 4Mi
|
||||
|
||||
#### --azureblob-upload-concurrency
|
||||
|
||||
Concurrency for multipart uploads.
|
||||
|
||||
This is the number of chunks of the same file that are uploaded
|
||||
concurrently.
|
||||
|
||||
If you are uploading small numbers of large files over high-speed
|
||||
links and these uploads do not fully utilize your bandwidth, then
|
||||
increasing this may help to speed up the transfers.
|
||||
|
||||
In tests, upload speed increases almost linearly with upload
|
||||
concurrency. For example to fill a gigabit pipe it may be necessary to
|
||||
raise this to 64. Note that this will use more memory.
|
||||
|
||||
Note that chunks are stored in memory and there may be up to
|
||||
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
|
||||
in memory.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_concurrency
|
||||
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
|
||||
- Type: int
|
||||
- Default: 16
|
||||
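As a rough worked example of the memory bound described above (the remote name and flag values are illustrative, using the documented defaults of 16 for concurrency and 4Mi for chunk size):

```
# up to --transfers * --azureblob-upload-concurrency chunks can be buffered at once
rclone copy /data azblob:container \
    --transfers 4 \
    --azureblob-upload-concurrency 16 \
    --azureblob-chunk-size 4Mi
# worst case buffered chunk data: 4 * 16 * 4 MiB = 256 MiB
```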
|
||||
#### --azureblob-list-chunk
|
||||
|
||||
Size of blob list.
|
||||
|
@ -322,6 +373,8 @@ minutes per megabyte on average, it will time out (
|
|||
). This can be used to limit the number of blobs items to return, to
|
||||
avoid the time out.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_chunk
|
||||
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
|
||||
- Type: int
|
||||
|
@ -342,10 +395,12 @@ If blobs are in "archive tier" at remote, trying to perform data transfer
|
|||
operations from remote will not be allowed. User should first restore by
|
||||
tiering blob to "Hot" or "Cool".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_tier
|
||||
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
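One possible way to restore an archived blob before transferring it, assuming an azureblob remote named `azblob` (names are illustrative) and using rclone's `settier` command:

```
# move the blob out of the archive tier, then copy it
rclone settier hot azblob:container/path/file.bin
rclone copy azblob:container/path/file.bin /tmp/restore/
```

Note that rehydration from the archive tier is not instant, so the copy may only succeed once the tier change has completed.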
|
||||
#### --azureblob-archive-tier-delete
|
||||
|
||||
|
@ -364,6 +419,8 @@ replacement. This has the potential for data loss if the upload fails
|
|||
archive tier blobs early may be chargable.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: archive_tier_delete
|
||||
- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
|
||||
- Type: bool
|
||||
|
@ -378,6 +435,8 @@ uploading it so it can add it to metadata on the object. This is great
|
|||
for data integrity checking but can cause long delays for large files
|
||||
to start uploading.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_checksum
|
||||
- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
|
||||
- Type: bool
|
||||
|
@ -390,6 +449,8 @@ How often internal memory buffer pools will be flushed.
|
|||
Uploads which require additional buffers (e.g. multipart) will use memory pool for allocations.
|
||||
This option controls how often unused buffers will be removed from the pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_flush_time
|
||||
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
|
||||
- Type: Duration
|
||||
|
@ -399,6 +460,8 @@ This option controls how often unused buffers will be removed from the pool.
|
|||
|
||||
Whether to use mmap buffers in internal memory pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_use_mmap
|
||||
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
|
||||
- Type: bool
|
||||
|
@ -406,10 +469,12 @@ Whether to use mmap buffers in internal memory pool.
|
|||
|
||||
#### --azureblob-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_AZUREBLOB_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -419,10 +484,12 @@ See the [encoding section in the overview](/overview/#encoding) for more info.
|
|||
|
||||
Public access level of a container: blob or container.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: public_access
|
||||
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- The container and its blobs can be accessed only with an authorized request.
|
||||
|
@ -436,6 +503,8 @@ Public access level of a container: blob or container.
|
|||
|
||||
If set, do not do HEAD before GET when getting objects.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_head_object
|
||||
- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
|
||||
- Type: bool
|
||||
|
|
|
@ -329,24 +329,30 @@ Here are the standard options specific to b2 (Backblaze B2).
|
|||
|
||||
Account ID or Application Key ID.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: account
|
||||
- Env Var: RCLONE_B2_ACCOUNT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --b2-key
|
||||
|
||||
Application Key.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key
|
||||
- Env Var: RCLONE_B2_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --b2-hard-delete
|
||||
|
||||
Permanently delete files on remote removal, otherwise hide files.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hard_delete
|
||||
- Env Var: RCLONE_B2_HARD_DELETE
|
||||
- Type: bool
|
||||
|
@ -362,10 +368,12 @@ Endpoint for the service.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_B2_ENDPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --b2-test-mode
|
||||
|
||||
|
@ -381,10 +389,12 @@ below will cause b2 to return specific errors:
|
|||
These will be set in the "X-Bz-Test-Mode" header which is documented
|
||||
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: test_mode
|
||||
- Env Var: RCLONE_B2_TEST_MODE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --b2-versions
|
||||
|
||||
|
@ -393,6 +403,8 @@ Include old versions in directory listings.
|
|||
Note that when using this no file write operations are permitted,
|
||||
so you can't upload files or delete them.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: versions
|
||||
- Env Var: RCLONE_B2_VERSIONS
|
||||
- Type: bool
|
||||
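For example, to inspect old versions without risking any writes (the remote and bucket names are illustrative):

```
# listings include old versions; upload and delete operations are refused
rclone ls --b2-versions b2remote:bucket/path
```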
|
@ -406,6 +418,8 @@ Files above this size will be uploaded in chunks of "--b2-chunk-size".
|
|||
|
||||
This value should be set no larger than 4.657 GiB (== 5 GB).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -420,6 +434,8 @@ copied in chunks of this size.
|
|||
|
||||
The minimum is 0 and the maximum is 4.6 GiB.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: copy_cutoff
|
||||
- Env Var: RCLONE_B2_COPY_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -436,6 +452,8 @@ might a maximum of "--transfers" chunks in progress at once.
|
|||
|
||||
5,000,000 Bytes is the minimum size.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_B2_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -450,6 +468,8 @@ uploading it so it can add it to metadata on the object. This is great
|
|||
for data integrity checking but can cause long delays for large files
|
||||
to start uploading.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_checksum
|
||||
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
|
||||
- Type: bool
|
||||
|
@ -466,10 +486,20 @@ If the custom endpoint rewrites the requests for authentication,
|
|||
e.g., in Cloudflare Workers, this header needs to be handled properly.
|
||||
Leave blank if you want to use the endpoint provided by Backblaze.
|
||||
|
||||
The URL provided here SHOULD have the protocol and SHOULD NOT have
|
||||
a trailing slash or specify the /file/bucket subpath as rclone will
|
||||
request files with "{download_url}/file/{bucket_name}/{path}".
|
||||
|
||||
Example:
|
||||
> https://mysubdomain.mydomain.tld
|
||||
(No trailing "/", "file" or "bucket")
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: download_url
|
||||
- Env Var: RCLONE_B2_DOWNLOAD_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
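A sketch of the corresponding config entry, reusing the example domain above (the remote name is illustrative and the account/key lines are omitted):

```
[b2remote]
type = b2
download_url = https://mysubdomain.mydomain.tld
```

rclone will then fetch files as "https://mysubdomain.mydomain.tld/file/{bucket_name}/{path}".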
|
||||
#### --b2-download-auth-duration
|
||||
|
||||
|
@ -478,6 +508,8 @@ Time before the authorization token will expire in s or suffix ms|s|m|h|d.
|
|||
The duration before the download authorization token will expire.
|
||||
The minimum value is 1 second. The maximum value is one week.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: download_auth_duration
|
||||
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
|
||||
- Type: Duration
|
||||
|
@ -489,6 +521,8 @@ How often internal memory buffer pools will be flushed.
|
|||
Uploads which require additional buffers (e.g. multipart) will use memory pool for allocations.
|
||||
This option controls how often unused buffers will be removed from the pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_flush_time
|
||||
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
|
||||
- Type: Duration
|
||||
|
@ -498,6 +532,8 @@ This option controls how often unused buffers will be removed from the pool.
|
|||
|
||||
Whether to use mmap buffers in internal memory pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_use_mmap
|
||||
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
|
||||
- Type: bool
|
||||
|
@ -505,10 +541,12 @@ Whether to use mmap buffers in internal memory pool.
|
|||
|
||||
#### --b2-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_B2_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -275,10 +275,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_BOX_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-client-secret
|
||||
|
||||
|
@ -286,10 +288,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_BOX_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-box-config-file
|
||||
|
||||
|
@ -299,10 +303,12 @@ Leave blank normally.
|
|||
|
||||
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: box_config_file
|
||||
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-access-token
|
||||
|
||||
|
@ -310,15 +316,19 @@ Box App Primary Access Token
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_token
|
||||
- Env Var: RCLONE_BOX_ACCESS_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-box-sub-type
|
||||
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: box_sub_type
|
||||
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
|
||||
- Type: string
|
||||
|
@ -337,10 +347,12 @@ Here are the advanced options specific to box (Box).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_BOX_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-auth-url
|
||||
|
||||
|
@ -348,10 +360,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_BOX_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-token-url
|
||||
|
||||
|
@ -359,15 +373,19 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_BOX_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-root-folder-id
|
||||
|
||||
Fill in for rclone to use a non root folder as its starting point.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
|
@ -377,6 +395,8 @@ Fill in for rclone to use a non root folder as its starting point.
|
|||
|
||||
Cutoff for switching to multipart upload (>= 50 MiB).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -386,6 +406,8 @@ Cutoff for switching to multipart upload (>= 50 MiB).
|
|||
|
||||
Max number of times to try committing a multipart file.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: commit_retries
|
||||
- Env Var: RCLONE_BOX_COMMIT_RETRIES
|
||||
- Type: int
|
||||
|
@ -395,6 +417,8 @@ Max number of times to try committing a multipart file.
|
|||
|
||||
Size of listing chunk 1-1000.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_chunk
|
||||
- Env Var: RCLONE_BOX_LIST_CHUNK
|
||||
- Type: int
|
||||
|
@ -404,17 +428,21 @@ Size of listing chunk 1-1000.
|
|||
|
||||
Only show items owned by the login (email address) passed in.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: owned_by
|
||||
- Env Var: RCLONE_BOX_OWNED_BY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --box-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_BOX_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -316,28 +316,34 @@ Remote to cache.
|
|||
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
|
||||
"myremote:bucket" or maybe "myremote:" (not recommended).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: remote
|
||||
- Env Var: RCLONE_CACHE_REMOTE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --cache-plex-url
|
||||
|
||||
The URL of the Plex server.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: plex_url
|
||||
- Env Var: RCLONE_CACHE_PLEX_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --cache-plex-username
|
||||
|
||||
The username of the Plex user.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: plex_username
|
||||
- Env Var: RCLONE_CACHE_PLEX_USERNAME
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --cache-plex-password
|
||||
|
||||
|
@ -345,10 +351,12 @@ The password of the Plex user.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: plex_password
|
||||
- Env Var: RCLONE_CACHE_PLEX_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
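A sketch of how to produce the obscured value (the password shown is illustrative):

```
# prints the obscured form expected by plex_password
rclone obscure 'my-plex-password'
# then place the printed value in rclone.conf:
#   plex_password = <output of the command above>
```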
|
||||
#### --cache-chunk-size
|
||||
|
||||
|
@ -358,6 +366,8 @@ Use lower numbers for slower connections. If the chunk size is
|
|||
changed, any downloaded chunks will be invalid and cache-chunk-path
|
||||
will need to be cleared or unexpected EOF errors will occur.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_CACHE_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -376,6 +386,8 @@ How long to cache file structure information (directory listings, file size, tim
|
|||
If all write operations are done through the cache then you can safely make
|
||||
this value very large as the cache store will also be updated in real time.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: info_age
|
||||
- Env Var: RCLONE_CACHE_INFO_AGE
|
||||
- Type: Duration
|
||||
|
@ -395,6 +407,8 @@ The total size that the chunks can take up on the local disk.
|
|||
If the cache exceeds this value then it will start to delete the
|
||||
oldest chunks until it goes under this value.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_total_size
|
||||
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -415,19 +429,23 @@ Here are the advanced options specific to cache (Cache a remote).
|
|||
|
||||
The plex token for authentication - auto set normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: plex_token
|
||||
- Env Var: RCLONE_CACHE_PLEX_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --cache-plex-insecure
|
||||
|
||||
Skip all certificate verification when connecting to the Plex server.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: plex_insecure
|
||||
- Env Var: RCLONE_CACHE_PLEX_INSECURE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --cache-db-path
|
||||
|
||||
|
@ -435,6 +453,8 @@ Directory to store file structure metadata DB.
|
|||
|
||||
The remote name is used as the DB file name.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: db_path
|
||||
- Env Var: RCLONE_CACHE_DB_PATH
|
||||
- Type: string
|
||||
|
@ -451,6 +471,8 @@ This config follows the "--cache-db-path". If you specify a custom
|
|||
location for "--cache-db-path" and don't specify one for "--cache-chunk-path"
|
||||
then "--cache-chunk-path" will use the same path as "--cache-db-path".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_path
|
||||
- Env Var: RCLONE_CACHE_CHUNK_PATH
|
||||
- Type: string
|
||||
|
@ -460,6 +482,8 @@ then "--cache-chunk-path" will use the same path as "--cache-db-path".
|
|||
|
||||
Clear all the cached data for this remote on start.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: db_purge
|
||||
- Env Var: RCLONE_CACHE_DB_PURGE
|
||||
- Type: bool
|
||||
|
@ -473,6 +497,8 @@ The default value should be ok for most people. If you find that the
|
|||
cache goes over "cache-chunk-total-size" too often then try to lower
|
||||
this value to force it to perform cleanups more often.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_clean_interval
|
||||
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
|
||||
- Type: Duration
|
||||
|
@ -490,6 +516,8 @@ cache isn't able to provide file data anymore.
|
|||
For really slow connections, increase this to a point where the stream is
|
||||
able to provide data but your experience will be very stuttering.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: read_retries
|
||||
- Env Var: RCLONE_CACHE_READ_RETRIES
|
||||
- Type: int
|
||||
|
@ -509,6 +537,8 @@ more fluid and data will be available much more faster to readers.
|
|||
setting will adapt to the type of reading performed and the value
|
||||
specified here will be used as a maximum number of workers to use.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: workers
|
||||
- Env Var: RCLONE_CACHE_WORKERS
|
||||
- Type: int
|
||||
|
@ -531,6 +561,8 @@ If the hardware permits it, use this feature to provide an overall better
|
|||
performance during streaming but it can also be disabled if RAM is not
|
||||
available on the local machine.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_no_memory
|
||||
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
|
||||
- Type: bool
|
||||
|
@ -556,6 +588,8 @@ useless but it is available to set for more special cases.
|
|||
other API calls to the cloud provider like directory listings will
|
||||
still pass.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: rps
|
||||
- Env Var: RCLONE_CACHE_RPS
|
||||
- Type: int
|
||||
|
@ -569,6 +603,8 @@ If you need to read files immediately after you upload them through
|
|||
cache you can enable this flag to have their data stored in the
|
||||
cache store at the same time during upload.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: writes
|
||||
- Env Var: RCLONE_CACHE_WRITES
|
||||
- Type: bool
|
||||
|
@ -585,10 +621,12 @@ Specifying a value will enable this feature. Without it, it is
|
|||
completely disabled and files will be uploaded directly to the cloud
|
||||
provider
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tmp_upload_path
|
||||
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --cache-tmp-wait-time
|
||||
|
||||
|
@ -600,6 +638,8 @@ _cache-tmp-upload-path_ before it is selected for upload.
|
|||
Note that only one file is uploaded at a time and it can take longer
|
||||
to start the upload if a queue formed for this purpose.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tmp_wait_time
|
||||
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
|
||||
- Type: Duration
|
||||
|
@ -615,6 +655,8 @@ error.
|
|||
|
||||
If you set it to 0 then it will wait forever.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: db_wait_time
|
||||
- Env Var: RCLONE_CACHE_DB_WAIT_TIME
|
||||
- Type: Duration
|
||||
|
@ -634,7 +676,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### stats
|
||||
|
||||
|
|
|
@@ -5,6 +5,138 @@ description: "Rclone Changelog"

# Changelog

## v1.58.0 - 2022-03-18

[See commits](https://github.com/rclone/rclone/compare/v1.57.0...v1.58.0)

* New backends
    * [Akamai Netstorage](/netstorage) (Nil Alexandrov)
    * [Seagate Lyve](/s3/#lyve), [SeaweedFS](/s3/#seaweedfs), [Storj](/s3/#storj), [RackCorp](/s3/#RackCorp) via s3 backend
    * [Storj](/storj/) (renamed from Tardigrade - your old config files will continue working)
* New commands
    * [bisync](/bisync/) - experimental bidirectional cloud sync (Ivan Andreev, Chris Nelson)
* New Features
    * build
        * Add `windows/arm64` build (`rclone mount` not supported yet) (Nick Craig-Wood)
        * Raise minimum go version to go1.15 (Nick Craig-Wood)
    * config: Allow dot in remote names and improve config editing (albertony)
    * dedupe: Add quit as a choice in interactive mode (albertony)
    * dlna: Change icons to the newest ones. (Alain Nussbaumer)
    * filter: Add [`{{ regexp }}` syntax](/filtering/#regexp) to pattern matches (Nick Craig-Wood)
    * fshttp: Add prometheus metrics for HTTP status code (Michał Matczuk)
    * hashsum: Support creating hash from data received on stdin (albertony)
    * librclone
        * Allow empty string or null input instead of empty json object (albertony)
        * Add support for mount commands (albertony)
    * operations: Add server-side moves to stats (Ole Frost)
    * rc: Allow user to disable authentication for web gui (negative0)
    * tree: Remove obsolete `--human` replaced by global `--human-readable` (albertony)
    * version: Report correct friendly-name for newer Windows 10/11 versions (albertony)
* Bug Fixes
    * build
        * Fix ARM architecture version in .deb packages after nfpm change (Nick Craig-Wood)
        * Hard fork `github.com/jlaffaye/ftp` to fix `go get github.com/rclone/rclone` (Nick Craig-Wood)
    * oauthutil: Fix crash when web browser requests `/robots.txt` (Nick Craig-Wood)
    * operations: Fix goroutine leak in case of copy retry (Ankur Gupta)
    * rc:
        * Fix `operations/publiclink` default for `expires` parameter (Nick Craig-Wood)
        * Fix missing computation of `transferQueueSize` when summing up statistics group (Carlo Mion)
        * Fix missing `StatsInfo` fields in the computation of the group sum (Carlo Mion)
    * sync: Fix `--max-duration` so it doesn't retry when the duration is exceeded (Nick Craig-Wood)
    * touch: Fix issue where a directory is created instead of a file (albertony)
* Mount
    * Add `--devname` to set the device name sent to FUSE for mount display (Nick Craig-Wood)
* VFS
    * Add `vfs/stats` remote control to show statistics (Nick Craig-Wood)
    * Fix `failed to _ensure cache internal error: downloaders is nil error` (Nick Craig-Wood)
    * Fix handling of special characters in file names (Bumsu Hyeon)
* Local
    * Fix hash invalidation which caused errors with local crypt mount (Nick Craig-Wood)
* Crypt
    * Add `base64` and `base32768` filename encoding options (Max Sum, Sinan Tan)
* Azure Blob
    * Implement `--azureblob-upload-concurrency` parameter to speed uploads (Nick Craig-Wood)
    * Remove 100MB upper limit on `chunk_size` as it is no longer needed (Nick Craig-Wood)
    * Raise `--azureblob-upload-concurrency` to 16 by default (Nick Craig-Wood)
    * Fix crash with SAS URL and no container (Nick Craig-Wood)
* Compress
    * Fix crash if metadata upload failed (Nick Craig-Wood)
    * Fix memory leak (Nick Craig-Wood)
* Drive
    * Added `--drive-copy-shortcut-content` (Abhiraj)
    * Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
        * See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
    * Add `--drive-skip-dangling-shortcuts` flag (Nick Craig-Wood)
    * When using a link type `--drive-export-formats` shows all doc types (Nick Craig-Wood)
* Dropbox
    * Speed up directory listings by specifying 1000 items in a chunk (Nick Craig-Wood)
    * Save an API request when at the root (Nick Craig-Wood)
* Fichier
    * Implemented About functionality (Gourav T)
* FTP
    * Add `--ftp-ask-password` to prompt for password when needed (Borna Butkovic)
* Google Cloud Storage
    * Add missing regions (Nick Craig-Wood)
    * Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
        * See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
* Googlephotos
    * Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
        * See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
* Hasher
    * Fix crash on object not found (Nick Craig-Wood)
* Hdfs
    * Add file (Move) and directory move (DirMove) support (Andy Jackson)
* HTTP
    * Improved recognition of URL pointing to a single file (albertony)
* Jottacloud
    * Change API used by recursive list (ListR) (Kim)
    * Add support for Tele2 Cloud (Fredric Arklid)
* Koofr
    * Add Digistorage service as a Koofr provider. (jaKa)
* Mailru
    * Fix int32 overflow on arm32 (Ivan Andreev)
* Onedrive
    * Add config option for oauth scope `Sites.Read.All` (Charlie Jiang)
    * Minor optimization of quickxorhash (Isaac Levy)
    * Add `--onedrive-root-folder-id` flag (Nick Craig-Wood)
    * Do not retry on `400 pathIsTooLong` error (ctrl-q)
* Pcloud
    * Add support for recursive list (ListR) (Niels van de Weem)
    * Fix pre-1970 time stamps (Nick Craig-Wood)
* S3
    * Use `ListObjectsV2` for faster listings (Felix Bünemann)
    * Fallback to `ListObject` v1 on unsupported providers (Nick Craig-Wood)
    * Use the `ETag` on multipart transfers to verify the transfer was OK (Nick Craig-Wood)
    * Add `--s3-use-multipart-etag` provider quirk to disable this on unsupported providers (Nick Craig-Wood)
    * New Providers
        * RackCorp object storage (bbabich)
        * Seagate Lyve Cloud storage (Nick Craig-Wood)
        * SeaweedFS (Chris Lu)
        * Storj Shared gateways (Márton Elek, Nick Craig-Wood)
    * Add Wasabi AP Northeast 2 endpoint info (lindwurm)
    * Add `GLACIER_IR` storage class (Yunhai Luo)
    * Document `Content-MD5` workaround for object-lock enabled buckets (Paulo Martins)
    * Fix multipart upload with `--no-head` flag (Nick Craig-Wood)
    * Simplify content length processing in s3 with download url (Logeshwaran Murugesan)
* SFTP
    * Add rclone to list of supported `md5sum`/`sha1sum` commands to look for (albertony)
    * Refactor so we only have one way of running remote commands (Nick Craig-Wood)
    * Fix timeout on hashing large files by sending keepalives (Nick Craig-Wood)
    * Fix unnecessary seeking when uploading and downloading files (Nick Craig-Wood)
    * Update docs on how to create `known_hosts` file (Nick Craig-Wood)
* Storj
    * Rename tardigrade backend to storj backend (Nick Craig-Wood)
    * Implement server side Move for files (Nick Craig-Wood)
    * Update docs to explain differences between s3 and this backend (Elek, Márton)
* Swift
    * Fix About so it shows info about the current container only (Nick Craig-Wood)
* Union
    * Fix treatment of remotes with `//` in (Nick Craig-Wood)
    * Fix deadlock when one part of a multi-upload fails (Nick Craig-Wood)
    * Fix eplus policy returned nil (Vitor Arruda)
* Yandex
    * Add permanent deletion support (deinferno)

## v1.57.0 - 2021-11-01

[See commits](https://github.com/rclone/rclone/compare/v1.56.0...v1.57.0)
|
|
@ -322,15 +322,19 @@ Remote to chunk/unchunk.
|
|||
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
|
||||
"myremote:bucket" or maybe "myremote:" (not recommended).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: remote
|
||||
- Env Var: RCLONE_CHUNKER_REMOTE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --chunker-chunk-size
|
||||
|
||||
Files larger than chunk size will be split in chunks.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -342,6 +346,8 @@ Choose how chunker handles hash sums.
|
|||
|
||||
All modes but "none" require metadata.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hash_type
|
||||
- Env Var: RCLONE_CHUNKER_HASH_TYPE
|
||||
- Type: string
|
||||
|
@ -378,6 +384,8 @@ If chunk number has less digits than the number of hashes, it is left-padded by
|
|||
If there are more digits in the number, they are left as is.
|
||||
Possible chunk files are ignored if their name does not match given format.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: name_format
|
||||
- Env Var: RCLONE_CHUNKER_NAME_FORMAT
|
||||
- Type: string
|
||||
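For illustration, assuming the default `*.rclone_chunk.###` pattern and chunk numbering starting from 1, a large file `video.avi` would be stored as chunk files named like this:

```
video.avi.rclone_chunk.001
video.avi.rclone_chunk.002
video.avi.rclone_chunk.003
```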
|
@ -389,6 +397,8 @@ Minimum valid chunk number. Usually 0 or 1.
|
|||
|
||||
By default chunk numbers start from 1.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: start_from
|
||||
- Env Var: RCLONE_CHUNKER_START_FROM
|
||||
- Type: int
|
||||
|
@ -401,6 +411,8 @@ Format of the metadata object or "none".
|
|||
By default "simplejson".
|
||||
Metadata is a small JSON file named after the composite file.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: meta_format
|
||||
- Env Var: RCLONE_CHUNKER_META_FORMAT
|
||||
- Type: string
|
||||
|
@ -418,6 +430,8 @@ Metadata is a small JSON file named after the composite file.
|
|||
|
||||
Choose how chunker should handle files with missing or invalid chunks.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: fail_hard
|
||||
- Env Var: RCLONE_CHUNKER_FAIL_HARD
|
||||
- Type: bool
|
||||
|
@ -432,6 +446,8 @@ Choose how chunker should handle files with missing or invalid chunks.
|
|||
|
||||
Choose how chunker should handle temporary files during transactions.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: transactions
|
||||
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
|
||||
- Type: string
|
||||
|
|
|
@ -36,7 +36,8 @@ See the [global flags page](/flags/) for global options not listed here.
|
|||
|
||||
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
|
||||
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
|
||||
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
|
||||
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
|
||||
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
|
||||
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
|
||||
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
|
||||
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
|
||||
|
|
|
@ -42,7 +42,7 @@ Applying a `--full` flag to the command prints the bytes in full, e.g.
|
|||
Trashed: 104857602
|
||||
Other: 8849156022
|
||||
|
||||
A `--json` flag generates conveniently computer readable output, e.g.
|
||||
A `--json` flag generates conveniently machine-readable output, e.g.
|
||||
|
||||
{
|
||||
"total": 18253611008,
|
||||
|
|
|
@ -1,18 +1,18 @@
|
|||
---
|
||||
title: "rclone backend"
|
||||
description: "Run a backend specific command."
|
||||
description: "Run a backend-specific command."
|
||||
slug: rclone_backend
|
||||
url: /commands/rclone_backend/
|
||||
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs"
|
||||
---
|
||||
# rclone backend
|
||||
|
||||
Run a backend specific command.
|
||||
Run a backend-specific command.
|
||||
|
||||
## Synopsis
|
||||
|
||||
|
||||
This runs a backend specific command. The commands themselves (except
|
||||
This runs a backend-specific command. The commands themselves (except
|
||||
for "help" and "features") are defined by the backends and you should
|
||||
see the backend docs for definitions.
|
||||
|
||||
|
@ -22,7 +22,7 @@ You can discover what commands a backend implements by using
|
|||
rclone backend help <backendname>
|
||||
|
||||
You can also discover information about the backend using (see
|
||||
[operations/fsinfo](/rc/#operations/fsinfo) in the remote control docs
|
||||
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
|
||||
for more info).
|
||||
|
||||
rclone backend features remote:
|
||||
|
@ -36,7 +36,7 @@ Pass arguments to the backend by placing them on the end of the line
|
|||
rclone backend cleanup remote:path file1 file2 file3
|
||||
|
||||
Note to run these commands on a running backend then see
|
||||
[backend/command](/rc/#backend/command) in the rc docs.
|
||||
[backend/command](/rc/#backend-command) in the rc docs.
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -13,7 +13,7 @@ Checks the files in the source and destination match.
|
|||
|
||||
|
||||
Checks the files in the source and destination match. It compares
|
||||
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
|
||||
sizes and hashes (MD5 or SHA1) and logs a report of files that don't
|
||||
match. It doesn't alter the source or destination.
|
||||
|
||||
If you supply the `--size-only` flag, it will only compare the sizes not
|
||||
|
|
|
@ -15,7 +15,7 @@ Create a new remote with name, type and options.
|
|||
Create a new remote of `name` with `type` and options. The options
|
||||
should be passed in pairs of `key` `value` or as `key=value`.
|
||||
|
||||
For example to make a swift remote of name myremote using auto config
|
||||
For example, to make a swift remote of name myremote using auto config
|
||||
you would do:
|
||||
|
||||
rclone config create myremote swift env_auth true
|
||||
|
@ -107,9 +107,8 @@ At the end of the non interactive process, rclone will return a result
|
|||
with `State` as empty string.
|
||||
|
||||
If `--all` is passed then rclone will ask all the config questions,
|
||||
not just the post config questions. Parameters that are supplied on
|
||||
the command line or from environment variables are used as defaults
|
||||
for questions as usual.
|
||||
not just the post config questions. Any parameters are used as
|
||||
defaults for questions as usual.
|
||||
|
||||
Note that `bin/config.py` in the rclone source implements this protocol
|
||||
as a readable demonstration.
|
||||
|
|
|
@ -16,7 +16,7 @@ Update an existing remote's password. The password
|
|||
should be passed in pairs of `key` `password` or as `key=password`.
|
||||
The `password` should be passed in in clear (unobscured).
|
||||
|
||||
For example to set password of a remote of name myremote you would do:
|
||||
For example, to set password of a remote of name myremote you would do:
|
||||
|
||||
rclone config password myremote fieldname mypassword
|
||||
rclone config password myremote fieldname=mypassword
|
||||
|
|
|
@ -15,7 +15,7 @@ Update options in an existing remote.
|
|||
Update an existing remote's options. The options should be passed in
|
||||
pairs of `key` `value` or as `key=value`.
|
||||
|
||||
For example to update the env_auth field of a remote of name myremote
|
||||
For example, to update the env_auth field of a remote of name myremote
|
||||
you would do:
|
||||
|
||||
rclone config update myremote env_auth true
|
||||
|
|
|
@ -32,7 +32,7 @@ name. It will do this iteratively until all the identically named
|
|||
directories have been merged.
|
||||
|
||||
Next, if deduping by name, for every group of duplicate file names /
|
||||
hashes, it will delete all but one identical files it finds without
|
||||
hashes, it will delete all but one identical file it finds without
|
||||
confirmation. This means that for most duplicated files the `dedupe` command will not be interactive.
|
||||
|
||||
`dedupe` considers files to be identical if they have the
|
||||
|
@ -43,7 +43,7 @@ identical if they have the same size (any hash will be ignored). This
|
|||
can be useful on crypt backends which do not support hashes.
|
||||
|
||||
Next rclone will resolve the remaining duplicates. Exactly which
|
||||
action is taken depends on the dedupe mode. By default rclone will
|
||||
action is taken depends on the dedupe mode. By default, rclone will
|
||||
interactively query the user for each one.
|
||||
|
||||
**Important**: Since this can cause data loss, test first with the
|
||||
|
@ -74,8 +74,7 @@ Now the `dedupe` session
|
|||
s) Skip and do nothing
|
||||
k) Keep just one (choose which in next step)
|
||||
r) Rename all to be different (by changing file.jpg to file-1.jpg)
|
||||
q) Quit
|
||||
s/k/r/q> k
|
||||
s/k/r> k
|
||||
Enter the number of the file to keep> 1
|
||||
one.txt: Deleted 1 extra copies
|
||||
two.txt: Found 3 files with duplicate names
|
||||
|
@ -86,8 +85,7 @@ Now the `dedupe` session
|
|||
s) Skip and do nothing
|
||||
k) Keep just one (choose which in next step)
|
||||
r) Rename all to be different (by changing file.jpg to file-1.jpg)
|
||||
q) Quit
|
||||
s/k/r/q> r
|
||||
s/k/r> r
|
||||
two-1.txt: renamed from: two.txt
|
||||
two-2.txt: renamed from: two.txt
|
||||
two-3.txt: renamed from: two.txt
|
||||
|
@ -112,7 +110,7 @@ Dedupe can be run non interactively using the `--dedupe-mode` flag or by using a
|
|||
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
|
||||
* `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing.
|
||||
|
||||
For example to rename all the identically named photos in your Google Photos directory, do
|
||||
For example, to rename all the identically named photos in your Google Photos directory, do
|
||||
|
||||
rclone dedupe --dedupe-mode rename "drive:Google Photos"
|
||||
|
||||
|
@ -128,7 +126,7 @@ rclone dedupe [mode] remote:path [flags]
|
|||
## Options
|
||||
|
||||
```
|
||||
--by-hash Find indentical hashes rather than names
|
||||
--by-hash Find identical hashes rather than names
|
||||
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
|
||||
-h, --help help for dedupe
|
||||
```
|
||||
|
|
|
@ -21,6 +21,11 @@ not supported by the remote, no hash will be returned. With the
|
|||
download flag, the file will be downloaded from the remote and
|
||||
hashed locally enabling any hash for any remote.
|
||||
|
||||
This command can also hash data received on standard input (stdin),
|
||||
by not passing a remote:path, or by passing a hyphen as remote:path
|
||||
when there is data to read (if not, the hyphen will be treated literally,
|
||||
as a relative path).
|
||||
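For example, to hash data piped on stdin, using a hyphen to mark it explicitly (any supported hash name can be used):

```
echo "hello world" | rclone hashsum SHA1 -
```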
|
||||
Run without a hash to see the list of all supported hashes, e.g.
|
||||
|
||||
$ rclone hashsum
|
||||
|
|
|
@ -34,17 +34,17 @@ There are several related list commands
|
|||
* `lsf` to list objects and directories in easy to parse format
|
||||
* `lsjson` to list objects and directories in JSON format
|
||||
|
||||
`ls`,`lsl`,`lsd` are designed to be human readable.
|
||||
`lsf` is designed to be human and machine readable.
|
||||
`lsjson` is designed to be machine readable.
|
||||
`ls`,`lsl`,`lsd` are designed to be human-readable.
|
||||
`lsf` is designed to be human and machine-readable.
|
||||
`lsjson` is designed to be machine-readable.
|
||||
|
||||
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
|
||||
|
||||
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
|
||||
|
||||
Listing a non existent directory will produce an error except for
|
||||
Listing a non-existent directory will produce an error except for
|
||||
remotes which can't have empty directories (e.g. s3, swift, or gcs -
|
||||
the bucket based remotes).
|
||||
the bucket-based remotes).
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -44,17 +44,17 @@ There are several related list commands
|
|||
* `lsf` to list objects and directories in easy to parse format
|
||||
* `lsjson` to list objects and directories in JSON format
|
||||
|
||||
`ls`,`lsl`,`lsd` are designed to be human readable.
|
||||
`lsf` is designed to be human and machine readable.
|
||||
`lsjson` is designed to be machine readable.
|
||||
`ls`,`lsl`,`lsd` are designed to be human-readable.
|
||||
`lsf` is designed to be human and machine-readable.
|
||||
`lsjson` is designed to be machine-readable.
|
||||
|
||||
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
|
||||
|
||||
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
|
||||
|
||||
Listing a non existent directory will produce an error except for
|
||||
Listing a non-existent directory will produce an error except for
|
||||
remotes which can't have empty directories (e.g. s3, swift, or gcs -
|
||||
the bucket based remotes).
|
||||
the bucket-based remotes).
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -59,13 +59,13 @@ can be returned as an empty string if it isn't available on the object
|
|||
the object and "UNSUPPORTED" if that object does not support that hash
|
||||
type.
|
||||
|
||||
For example to emulate the md5sum command you can use
|
||||
For example, to emulate the md5sum command you can use
|
||||
|
||||
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
|
||||
|
||||
Eg
|
||||
|
||||
$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
|
||||
$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
|
||||
7908e352297f0f530b84a756f188baa3 bevajer5jef
|
||||
cd65ac234e6fea5925974a51cdd865cc canole
|
||||
03b5341b4f234b9d984d03ad076bae91 diwogej7
|
||||
|
@ -100,7 +100,7 @@ Eg
|
|||
Note that the --absolute parameter is useful for making lists of files
|
||||
to pass to an rclone copy with the --files-from-raw flag.
|
||||
|
||||
For example to find all the files modified within one day and copy
|
||||
For example, to find all the files modified within one day and copy
|
||||
those only (without traversing the whole directory structure):
|
||||
|
||||
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
|
||||
|
@ -117,17 +117,17 @@ There are several related list commands
|
|||
* `lsf` to list objects and directories in easy to parse format
|
||||
* `lsjson` to list objects and directories in JSON format
|
||||
|
||||
`ls`,`lsl`,`lsd` are designed to be human readable.
|
||||
`lsf` is designed to be human and machine readable.
|
||||
`lsjson` is designed to be machine readable.
|
||||
`ls`,`lsl`,`lsd` are designed to be human-readable.
|
||||
`lsf` is designed to be human and machine-readable.
|
||||
`lsjson` is designed to be machine-readable.
|
||||
|
||||
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
|
||||
|
||||
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
|
||||
|
||||
Listing a non existent directory will produce an error except for
|
||||
Listing a non-existent directory will produce an error except for
|
||||
remotes which can't have empty directories (e.g. s3, swift, or gcs -
|
||||
the bucket based remotes).
|
||||
the bucket-based remotes).
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -66,7 +66,7 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
|
|||
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
|
||||
When used without --recursive the Path will always be the same as Name.
|
||||
|
||||
If the directory is a bucket in a bucket based backend, then
|
||||
If the directory is a bucket in a bucket-based backend, then
|
||||
"IsBucket" will be set to true. This key won't be present unless it is
|
||||
"true".
|
||||
|
||||
|
@ -91,17 +91,17 @@ There are several related list commands
|
|||
* `lsf` to list objects and directories in easy to parse format
|
||||
* `lsjson` to list objects and directories in JSON format
|
||||
|
||||
`ls`,`lsl`,`lsd` are designed to be human readable.
|
||||
`lsf` is designed to be human and machine readable.
|
||||
`lsjson` is designed to be machine readable.
|
||||
`ls`,`lsl`,`lsd` are designed to be human-readable.
|
||||
`lsf` is designed to be human and machine-readable.
|
||||
`lsjson` is designed to be machine-readable.
|
||||
|
||||
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
|
||||
|
||||
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
|
||||
|
||||
Listing a non existent directory will produce an error except for
|
||||
Listing a non-existent directory will produce an error except for
|
||||
remotes which can't have empty directories (e.g. s3, swift, or gcs -
|
||||
the bucket based remotes).
|
||||
the bucket-based remotes).
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -34,17 +34,17 @@ There are several related list commands
|
|||
* `lsf` to list objects and directories in easy to parse format
|
||||
* `lsjson` to list objects and directories in JSON format
|
||||
|
||||
`ls`,`lsl`,`lsd` are designed to be human readable.
|
||||
`lsf` is designed to be human and machine readable.
|
||||
`lsjson` is designed to be machine readable.
|
||||
`ls`,`lsl`,`lsd` are designed to be human-readable.
|
||||
`lsf` is designed to be human and machine-readable.
|
||||
`lsjson` is designed to be machine-readable.
|
||||
|
||||
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
|
||||
|
||||
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
|
||||
|
||||
Listing a non existent directory will produce an error except for
|
||||
Listing a non-existent directory will produce an error except for
|
||||
remotes which can't have empty directories (e.g. s3, swift, or gcs -
|
||||
the bucket based remotes).
|
||||
the bucket-based remotes).
|
||||
|
||||
|
||||
```
|
||||
|
|
|
@ -20,6 +20,11 @@ not supported by the remote, no hash will be returned. With the
|
|||
download flag, the file will be downloaded from the remote and
|
||||
hashed locally enabling MD5 for any remote.
|
||||
|
||||
This command can also hash data received on standard input (stdin),
|
||||
by not passing a remote:path, or by passing a hyphen as remote:path
|
||||
when there is data to read (if not, the hyphen will be treated literally,
|
||||
as a relative path).
|
||||
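For example, a sketch of hashing piped data rather than a remote path (the file name is illustrative):

```
cat some-local-file | rclone md5sum -
```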
|
||||
|
||||
```
|
||||
rclone md5sum remote:path [flags]
|
||||
|
|
|
@ -75,7 +75,7 @@ at all, then 1 PiB is set as both the total and the free size.
|
|||
To run rclone mount on Windows, you will need to
|
||||
download and install [WinFsp](http://www.secfs.net/winfsp/).
|
||||
|
||||
[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
|
||||
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
|
||||
Windows File System Proxy which makes it easy to write user space file
|
||||
systems for Windows. It provides a FUSE emulation layer which rclone
|
||||
uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
|
||||
|
@ -245,7 +245,7 @@ applications won't work with their files on an rclone mount without
|
|||
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
|
||||
See the [VFS File Caching](#vfs-file-caching) section for more info.
|
||||
|
||||
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
|
||||
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
|
||||
Hubic) do not support the concept of empty directories, so empty
|
||||
directories will have a tendency to disappear once they fall out of
|
||||
the directory cache.
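A typical invocation enabling the write cache mentioned above might look like this (the mountpoint is illustrative):

```
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes
```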
|
||||
|
@ -689,6 +689,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
|
|||
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
|
||||
--debug-fuse Debug the FUSE internals - needs -v
|
||||
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
|
||||
--devname string Set the device name - default is remote:path
|
||||
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
|
||||
--dir-perms FileMode Directory permissions (default 0777)
|
||||
--file-perms FileMode File permissions (default 0666)
|
||||
|
|
|
@ -11,7 +11,7 @@ Obscure password for use in the rclone config file.
|
|||
|
||||
## Synopsis
|
||||
|
||||
In the rclone config file, human readable passwords are
|
||||
In the rclone config file, human-readable passwords are
|
||||
obscured. Obscuring them is done by encrypting them and writing them
|
||||
out in base64. This is **not** a secure way of encrypting these
|
||||
passwords as rclone can decrypt them - it is to prevent "eyedropping"
|
||||
|
|
|
@ -349,6 +349,7 @@ rclone serve docker [flags]
|
|||
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
|
||||
--debug-fuse Debug the FUSE internals - needs -v
|
||||
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
|
||||
--devname string Set the device name - default is remote:path
|
||||
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
|
||||
--dir-perms FileMode Directory permissions (default 0777)
|
||||
--file-perms FileMode File permissions (default 0666)
|
||||
|
|
|
@ -59,6 +59,9 @@ supply --client-ca also.
|
|||
of that with the CA certificate. --key should be the PEM encoded
|
||||
private key and --client-ca should be the PEM encoded client
|
||||
certificate authority certificate.
|
||||
|
||||
### Template
|
||||
|
||||
--template allows a user to specify a custom markup template for http
|
||||
and webdav serve functions. The server exports the following markup
|
||||
to be used within the template to server pages:
|
||||
|
|
|
@ -15,7 +15,7 @@ rclone serve restic implements restic's REST backend API
|
|||
over HTTP. This allows restic to use rclone as a data storage
|
||||
mechanism for cloud providers that restic does not support directly.
|
||||
|
||||
[Restic](https://restic.net/) is a command line program for doing
|
||||
[Restic](https://restic.net/) is a command-line program for doing
|
||||
backups.
|
||||
|
||||
The server will log errors. Use -v to see access logs.
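A minimal sketch of using the two together, assuming the default listen address of 127.0.0.1:8080:

```
rclone serve restic -v remote:backup
# in another shell, point restic's REST backend at the server
restic -r rest:http://localhost:8080/ init
```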
|
||||
|
@ -194,7 +194,7 @@ rclone serve restic remote:path [flags]
|
|||
--max-header-bytes int Maximum size of request header (default 4096)
|
||||
--pass string Password for authentication
|
||||
--private-repos Users can only access their private repo
|
||||
--realm string realm for authentication (default "rclone")
|
||||
--realm string Realm for authentication (default "rclone")
|
||||
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
|
||||
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
|
||||
--stdio Run an HTTP2 server on stdin/stdout
|
||||
|
|
|
@ -49,6 +49,17 @@ be used with sshd via ~/.ssh/authorized_keys, for example:
|
|||
|
||||
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
|
||||
|
||||
On the client you need to set "--transfers 1" when using --stdio.
|
||||
Otherwise multiple instances of the rclone server are started by OpenSSH
|
||||
which can lead to "corrupted on transfer" errors. This is the case because
|
||||
the client chooses indiscriminately which server to send commands to while
|
||||
the servers all have different views of the state of the filing system.
|
||||
|
||||
The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from beeing
|
||||
used. Omitting "restrict" and using --sftp-path-override to enable
|
||||
checksumming is possible but less secure and you could use the SFTP server
|
||||
provided by OpenSSH in this case.
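A hypothetical client-side transfer honouring the `--transfers 1` advice above, assuming an sftp remote named `sshserver:` that reaches the --stdio server:

```
rclone copy /local/photos sshserver:photos --transfers 1
```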
|
||||
|
||||
|
||||
## VFS - Virtual File System
|
||||
|
||||
|
@ -341,7 +352,7 @@ together, if `--auth-proxy` is set the authorized keys option will be
|
|||
ignored.
|
||||
|
||||
There is an example program
|
||||
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
|
||||
[bin/test_proxy.py](https://github.com/rclone/rclone/blob/master/test_proxy.py)
|
||||
in the rclone source code.
|
||||
|
||||
The program's job is to take a `user` and `pass` on the input and turn
|
||||
|
|
|
@ -501,7 +501,7 @@ rclone serve webdav remote:path [flags]
|
|||
--pass string Password for authentication
|
||||
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
|
||||
--read-only Mount read-only
|
||||
--realm string realm for authentication (default "rclone")
|
||||
--realm string Realm for authentication (default "rclone")
|
||||
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
|
||||
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
|
||||
--template string User-specified template
|
||||
|
|
|
@ -20,6 +20,14 @@ not supported by the remote, no hash will be returned. With the
|
|||
download flag, the file will be downloaded from the remote and
|
||||
hashed locally enabling SHA-1 for any remote.
|
||||
|
||||
This command can also hash data received on standard input (stdin),
|
||||
by not passing a remote:path, or by passing a hyphen as remote:path
|
||||
when there is data to read (if not, the hyphen will be treated literally,
|
||||
as a relative path).
|
||||
|
||||
This command can also hash data received on STDIN, if not passing
|
||||
a remote:path.
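A minimal stdin example matching the description above:

```
echo "hello world" | rclone sha1sum -
```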
|
||||
|
||||
|
||||
```
|
||||
rclone sha1sum remote:path [flags]
|
||||
|
|
|
@ -25,7 +25,7 @@ For example
|
|||
└── subdir
|
||||
├── file4
|
||||
└── file5
|
||||
|
||||
|
||||
1 directories, 5 files
|
||||
|
||||
You can use any of the filtering options with the tree command (e.g.
|
||||
|
@ -49,7 +49,6 @@ rclone tree remote:path [flags]
|
|||
--dirsfirst List directories before files (-U disables)
|
||||
--full-path Print the full path prefix for each file
|
||||
-h, --help help for tree
|
||||
--human Print the size in a more human readable way.
|
||||
--level int Descend only level directories deep
|
||||
-D, --modtime Print the date of last modification.
|
||||
--noindent Don't print indentation lines
|
||||
|
|
|
@ -96,15 +96,19 @@ Here are the standard options specific to compress (Compress a remote).
|
|||
|
||||
Remote to compress.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: remote
|
||||
- Env Var: RCLONE_COMPRESS_REMOTE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --compress-mode
|
||||
|
||||
Compression mode.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: mode
|
||||
- Env Var: RCLONE_COMPRESS_MODE
|
||||
- Type: string
|
||||
|
@ -129,6 +133,8 @@ Level -2 uses Huffman encoding only. Only use if you know what you
|
|||
are doing.
|
||||
Level 0 turns off compression.
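A sketch of a complete compress remote in the config file, assuming a hypothetical underlying remote named `mydrive:` and the default gzip mode that the level range above refers to:

```
[compressed]
type = compress
remote = mydrive:archive
mode = gzip
level = 9
```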
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: level
|
||||
- Env Var: RCLONE_COMPRESS_LEVEL
|
||||
- Type: int
|
||||
|
@ -143,6 +149,8 @@ its size.
|
|||
Files smaller than this limit will be cached in RAM, files larger than
|
||||
this limit will be cached on disk.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: ram_cache_limit
|
||||
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
|
||||
- Type: SizeSuffix
|
||||
|
|
|
@ -428,15 +428,19 @@ Remote to encrypt/decrypt.
|
|||
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
|
||||
"myremote:bucket" or maybe "myremote:" (not recommended).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: remote
|
||||
- Env Var: RCLONE_CRYPT_REMOTE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --crypt-filename-encryption
|
||||
|
||||
How to encrypt the filenames.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: filename_encryption
|
||||
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
|
||||
- Type: string
|
||||
|
@ -457,6 +461,8 @@ Option to either encrypt directory names or leave them intact.
|
|||
|
||||
NB If filename_encryption is "off" then this option will do nothing.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: directory_name_encryption
|
||||
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
|
||||
- Type: bool
|
||||
|
@ -473,10 +479,12 @@ Password or pass phrase for encryption.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: password
|
||||
- Env Var: RCLONE_CRYPT_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --crypt-password2
|
||||
|
||||
|
@ -487,10 +495,12 @@ Should be different to the previous password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: password2
|
||||
- Env Var: RCLONE_CRYPT_PASSWORD2
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -509,6 +519,8 @@ pointing to two different directories with the single changed
|
|||
parameter and use rclone move to move the files between the crypt
|
||||
remotes.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: server_side_across_configs
|
||||
- Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS
|
||||
- Type: bool
|
||||
|
@ -526,6 +538,8 @@ This is so you can work out which encrypted names are which decrypted
|
|||
names just in case you need to do something with the encrypted file
|
||||
names, or for debugging purposes.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: show_mapping
|
||||
- Env Var: RCLONE_CRYPT_SHOW_MAPPING
|
||||
- Type: bool
|
||||
|
@ -535,6 +549,8 @@ names, or for debugging purposes.
|
|||
|
||||
Option to either encrypt file data or leave it unencrypted.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_data_encryption
|
||||
- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
|
||||
- Type: bool
|
||||
|
@ -545,6 +561,29 @@ Option to either encrypt file data or leave it unencrypted.
|
|||
- "false"
|
||||
- Encrypt file data.
|
||||
|
||||
#### --crypt-filename-encoding
|
||||
|
||||
How to encode the encrypted filename to text string.
|
||||
|
||||
This option could help with shortening the encrypted filename. The
|
||||
suitable option would depend on the way your remote counts the filename
|
||||
length and if it's case-sensitive.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: filename_encoding
|
||||
- Env Var: RCLONE_CRYPT_FILENAME_ENCODING
|
||||
- Type: string
|
||||
- Default: "base32"
|
||||
- Examples:
|
||||
- "base32"
|
||||
- Encode using base32. Suitable for all remotes.
|
||||
- "base64"
|
||||
- Encode using base64. Suitable for case-sensitive remotes.
|
||||
- "base32768"
|
||||
- Encode using base32768. Suitable if your remote counts UTF-16 or
|
||||
- Unicode codepoints instead of UTF-8 byte length. (E.g. OneDrive)
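As an illustration, a hypothetical crypt remote wrapping a OneDrive path might choose base32768 as suggested above:

```
[secret]
type = crypt
remote = onedrive:encrypted
filename_encoding = base32768
# password must be the obscured form - see "rclone obscure"
password = XXX
```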
|
||||
|
||||
## Backend commands
|
||||
|
||||
Here are the commands specific to the crypt backend.
|
||||
|
@ -559,7 +598,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### encode
|
||||
|
||||
|
|
|
@ -554,10 +554,12 @@ Setting your own is recommended.
|
|||
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
|
||||
If you leave this blank, it will use an internal key which is low performance.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_DRIVE_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-client-secret
|
||||
|
||||
|
@ -565,19 +567,23 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-scope
|
||||
|
||||
Scope that rclone should use when requesting access from drive.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: scope
|
||||
- Env Var: RCLONE_DRIVE_SCOPE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "drive"
|
||||
- Full access all files, excluding Application Data Folder.
|
||||
|
@ -603,10 +609,12 @@ Fill in to access "Computers" folders (see docs), or for rclone to use
|
|||
a non root folder as its starting point.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-service-account-file
|
||||
|
||||
|
@ -617,15 +625,19 @@ Needed only if you want use SA instead of interactive login.
|
|||
|
||||
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_account_file
|
||||
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-alternate-export
|
||||
|
||||
Deprecated: No longer needed.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: alternate_export
|
||||
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
|
||||
- Type: bool
|
||||
|
@ -639,10 +651,12 @@ Here are the advanced options specific to drive (Google Drive).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_DRIVE_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-auth-url
|
||||
|
||||
|
@ -650,10 +664,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_DRIVE_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-token-url
|
||||
|
||||
|
@ -661,10 +677,12 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_DRIVE_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-service-account-credentials
|
||||
|
||||
|
@ -673,24 +691,30 @@ Service Account Credentials JSON blob.
|
|||
Leave blank normally.
|
||||
Needed only if you want use SA instead of interactive login.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_account_credentials
|
||||
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-team-drive
|
||||
|
||||
ID of the Shared Drive (Team Drive).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: team_drive
|
||||
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-auth-owner-only
|
||||
|
||||
Only consider files owned by the authenticated user.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_owner_only
|
||||
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
|
||||
- Type: bool
|
||||
|
@ -703,17 +727,38 @@ Send files to the trash instead of deleting permanently.
|
|||
Defaults to true, namely sending files to the trash.
|
||||
Use `--drive-use-trash=false` to delete files permanently instead.
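For example, to delete permanently rather than trashing (the path is illustrative):

```
rclone delete drive:old-backups --drive-use-trash=false
```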
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_trash
|
||||
- Env Var: RCLONE_DRIVE_USE_TRASH
|
||||
- Type: bool
|
||||
- Default: true
|
||||
|
||||
#### --drive-copy-shortcut-content
|
||||
|
||||
Server side copy contents of shortcuts instead of the shortcut.
|
||||
|
||||
When doing server side copies, normally rclone will copy shortcuts as
|
||||
shortcuts.
|
||||
|
||||
If this flag is used then rclone will copy the contents of shortcuts
|
||||
rather than shortcuts themselves when doing server side copies.
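For example, a server-side copy that resolves shortcuts into their target contents (paths are illustrative):

```
rclone copy drive:source drive:dest --drive-copy-shortcut-content
```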
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: copy_shortcut_content
|
||||
- Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
#### --drive-skip-gdocs
|
||||
|
||||
Skip google documents in all listings.
|
||||
|
||||
If given, gdocs practically become invisible to rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_gdocs
|
||||
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
|
||||
- Type: bool
|
||||
|
@ -734,6 +779,8 @@ Google photos are identified by being in the "photos" space.
|
|||
Corrupted checksums are caused by Google modifying the image/video but
|
||||
not updating the checksum.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_checksum_gphotos
|
||||
- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
|
||||
- Type: bool
|
||||
|
@ -750,6 +797,8 @@ with you).
|
|||
This works both with the "list" (lsd, lsl, etc.) and the "copy"
|
||||
commands (copy, sync, etc.), and with all other commands too.
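For example (the file name is illustrative):

```
rclone lsd drive: --drive-shared-with-me
rclone copy drive:shared-report.odt /tmp --drive-shared-with-me
```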
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shared_with_me
|
||||
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
|
||||
- Type: bool
|
||||
|
@ -761,6 +810,8 @@ Only show files that are in the trash.
|
|||
|
||||
This will show trashed files in their original directory structure.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: trashed_only
|
||||
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
|
||||
- Type: bool
|
||||
|
@ -770,6 +821,8 @@ This will show trashed files in their original directory structure.
|
|||
|
||||
Only show files that are starred.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: starred_only
|
||||
- Env Var: RCLONE_DRIVE_STARRED_ONLY
|
||||
- Type: bool
|
||||
|
@ -779,15 +832,19 @@ Only show files that are starred.
|
|||
|
||||
Deprecated: See export_formats.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: formats
|
||||
- Env Var: RCLONE_DRIVE_FORMATS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-export-formats
|
||||
|
||||
Comma separated list of preferred formats for downloading Google docs.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: export_formats
|
||||
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
|
||||
- Type: string
|
||||
|
@ -797,10 +854,12 @@ Comma separated list of preferred formats for downloading Google docs.
|
|||
|
||||
Comma separated list of preferred formats for uploading Google docs.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: import_formats
|
||||
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-allow-import-name-change
|
||||
|
||||
|
@ -808,6 +867,8 @@ Allow the filetype to change when uploading Google docs.
|
|||
|
||||
E.g. file.doc to file.docx. This will confuse sync and reupload every time.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: allow_import_name_change
|
||||
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
|
||||
- Type: bool
|
||||
|
@ -833,6 +894,8 @@ Photos folder" option in your google drive settings. You can then copy
|
|||
or move the photos locally and use the date the image was taken
|
||||
(created) set as the modification date.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_created_date
|
||||
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
|
||||
- Type: bool
|
||||
|
@ -848,6 +911,8 @@ unexpected consequences when uploading/downloading files.
|
|||
If both this flag and "--drive-use-created-date" are set, the created
|
||||
date is used.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_shared_date
|
||||
- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
|
||||
- Type: bool
|
||||
|
@ -857,6 +922,8 @@ date is used.
|
|||
|
||||
Size of listing chunk 100-1000, 0 to disable.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_chunk
|
||||
- Env Var: RCLONE_DRIVE_LIST_CHUNK
|
||||
- Type: int
|
||||
|
@ -866,15 +933,19 @@ Size of listing chunk 100-1000, 0 to disable.
|
|||
|
||||
Impersonate this user when using a service account.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: impersonate
|
||||
- Env Var: RCLONE_DRIVE_IMPERSONATE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --drive-upload-cutoff
|
||||
|
||||
Cutoff for switching to chunked upload.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -891,6 +962,8 @@ is buffered in memory one per transfer.
|
|||
|
||||
Reducing this will reduce memory usage but decrease performance.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -906,6 +979,8 @@ as malware or spam and cannot be downloaded" with the error code
|
|||
indicate you acknowledge the risks of downloading the file and rclone
|
||||
will download it anyway.
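For example, to fetch a file that Google has flagged (the file name is illustrative):

```
rclone copy drive:flagged-file.zip /tmp --drive-acknowledge-abuse
```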
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: acknowledge_abuse
|
||||
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
|
||||
- Type: bool
|
||||
|
@ -915,6 +990,8 @@ will download it anyway.
|
|||
|
||||
Keep new head revision of each file forever.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: keep_revision_forever
|
||||
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
|
||||
- Type: bool
|
||||
|
@ -937,6 +1014,8 @@ doing rclone ls/lsl/lsf/lsjson/etc only.
|
|||
If you do use this flag for syncing (not recommended) then you will
|
||||
need to use --ignore-size also.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: size_as_quota
|
||||
- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
|
||||
- Type: bool
|
||||
|
@ -946,6 +1025,8 @@ need to use --ignore size also.
|
|||
|
||||
If objects are greater than this, use drive v2 API to download.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: v2_download_min_size
|
||||
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -955,6 +1036,8 @@ If Object's are greater, use drive v2 API to download.
|
|||
|
||||
Minimum time to sleep between API calls.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pacer_min_sleep
|
||||
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
|
||||
- Type: Duration
|
||||
|
@ -964,6 +1047,8 @@ Minimum time to sleep between API calls.
|
|||
|
||||
Number of API calls to allow without sleeping.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pacer_burst
|
||||
- Env Var: RCLONE_DRIVE_PACER_BURST
|
||||
- Type: int
|
||||
|
@ -978,6 +1063,8 @@ different Google drives. Note that this isn't enabled by default
|
|||
because it isn't easy to tell if it will work between any two
|
||||
configurations.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: server_side_across_configs
|
||||
- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
|
||||
- Type: bool
|
||||
|
@ -996,6 +1083,8 @@ See: https://github.com/rclone/rclone/issues/3631
|
|||
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_http2
|
||||
- Env Var: RCLONE_DRIVE_DISABLE_HTTP2
|
||||
- Type: bool
|
||||
|
@ -1017,6 +1106,8 @@ Google don't document so it may break in the future.
|
|||
See: https://github.com/rclone/rclone/issues/3857
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: stop_on_upload_limit
|
||||
- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
|
||||
- Type: bool
|
||||
|
@ -1036,6 +1127,8 @@ Note that this detection is relying on error message strings which
|
|||
Google don't document so it may break in the future.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: stop_on_download_limit
|
||||
- Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT
|
||||
- Type: bool
|
||||
|
@ -1050,17 +1143,35 @@ they are the original file (see [the shortcuts section](#shortcuts)).
|
|||
If this flag is set then rclone will ignore shortcut files completely.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_shortcuts
|
||||
- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
#### --drive-skip-dangling-shortcuts
|
||||
|
||||
If set skip dangling shortcut files.
|
||||
|
||||
If this is set then rclone will not show any dangling shortcuts in listings.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_dangling_shortcuts
|
||||
- Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
#### --drive-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_DRIVE_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -1080,7 +1191,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### get
|
||||
|
||||
|
|
|
@ -190,10 +190,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_DROPBOX_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --dropbox-client-secret
|
||||
|
||||
|
@ -201,10 +203,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -214,10 +218,12 @@ Here are the advanced options specific to dropbox (Dropbox).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_DROPBOX_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --dropbox-auth-url
|
||||
|
||||
|
@ -225,10 +231,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_DROPBOX_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --dropbox-token-url
|
||||
|
||||
|
@ -236,10 +244,12 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_DROPBOX_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --dropbox-chunk-size
|
||||
|
||||
|
@ -252,6 +262,8 @@ deal with retries. Setting this larger will increase the speed
|
|||
slightly (at most 10% for 128 MiB in tests) at the cost of using more
|
||||
memory. It can be set smaller if you are tight on memory.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -276,10 +288,12 @@ permissions doesn't include "members.read". This can be added once
|
|||
v1.55 or later is in use everywhere.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: impersonate
|
||||
- Env Var: RCLONE_DROPBOX_IMPERSONATE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --dropbox-shared-files
|
||||
|
||||
|
@ -289,6 +303,8 @@ In this mode rclone's features are extremely limited - only list (ls, lsl, etc.)
|
|||
operations and read operations (e.g. downloading) are supported in this mode.
|
||||
All other operations will be disabled.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shared_files
|
||||
- Env Var: RCLONE_DROPBOX_SHARED_FILES
|
||||
- Type: bool
|
||||
|
@ -309,6 +325,8 @@ Note that we don't unmount the shared folder afterwards so the
|
|||
--dropbox-shared-folders can be omitted after the first use of a particular
|
||||
shared folder.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shared_folders
|
||||
- Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
|
||||
- Type: bool
|
||||
|
@ -332,6 +350,8 @@ Rclone will close any outstanding batches when it exits which may make
|
|||
a delay on quit.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: batch_mode
|
||||
- Env Var: RCLONE_DROPBOX_BATCH_MODE
|
||||
- Type: string
|
||||
|
@ -358,6 +378,8 @@ as it will make them a lot quicker. You can use --transfers 32 to
|
|||
maximise throughput.
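For example, an upload of many small files taking advantage of the batching described above (paths are illustrative):

```
rclone copy /path/to/many-small-files dropbox:backup --transfers 32
```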
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: batch_size
|
||||
- Env Var: RCLONE_DROPBOX_BATCH_SIZE
|
||||
- Type: int
|
||||
|
@ -378,6 +400,8 @@ default based on the batch_mode in use.
|
|||
- batch_mode: off - not in use
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: batch_timeout
|
||||
- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
|
||||
- Type: Duration
|
||||
|
@ -387,6 +411,8 @@ default based on the batch_mode in use.
|
|||
|
||||
Max time to wait for a batch to finish committing
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: batch_commit_timeout
|
||||
- Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
|
||||
- Type: Duration
|
||||
|
@ -394,10 +420,12 @@ Max time to wait for a batch to finish comitting
|
|||
|
||||
#### --dropbox-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_DROPBOX_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -122,10 +122,12 @@ Here are the standard options specific to fichier (1Fichier).
|
|||
|
||||
Your API Key, get it from https://1fichier.com/console/params.pl.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: api_key
|
||||
- Env Var: RCLONE_FICHIER_API_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -135,10 +137,12 @@ Here are the advanced options specific to fichier (1Fichier).
|
|||
|
||||
If you want to download a shared folder, add this parameter.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shared_folder
|
||||
- Env Var: RCLONE_FICHIER_SHARED_FOLDER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --fichier-file-password
|
||||
|
||||
|
@ -146,10 +150,12 @@ If you want to download a shared file that is password protected, add this param
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: file_password
|
||||
- Env Var: RCLONE_FICHIER_FILE_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --fichier-folder-password
|
||||
|
||||
|
@ -157,17 +163,21 @@ If you want to list the files in a shared folder that is password protected, add
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: folder_password
|
||||
- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --fichier-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_FICHIER_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -160,10 +160,12 @@ Here are the standard options specific to filefabric (Enterprise File Fabric).
|
|||
|
||||
URL of the Enterprise File Fabric to connect to.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: url
|
||||
- Env Var: RCLONE_FILEFABRIC_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
- Examples:
|
||||
- "https://storagemadeeasy.com"
|
||||
- Storage Made Easy US
|
||||
|
@ -181,10 +183,12 @@ Leave blank normally.
|
|||
Fill in to make rclone start with directory of a given ID.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --filefabric-permanent-token
|
||||
|
||||
|
@ -200,10 +204,12 @@ These tokens are normally valid for several years.
|
|||
For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
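A sketch of the resulting config entry, with a placeholder token:

```
[filefabric]
type = filefabric
url = https://storagemadeeasy.com
permanent_token = XXXXXXXXXXXX
```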
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: permanent_token
|
||||
- Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -219,10 +225,12 @@ usually valid for 1 hour.
|
|||
Don't set this value - rclone will set it automatically.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_FILEFABRIC_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --filefabric-token-expiry
|
||||
|
||||
|
@ -231,10 +239,12 @@ Token expiry time.
|
|||
Don't set this value - rclone will set it automatically.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_expiry
|
||||
- Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --filefabric-version
|
||||
|
||||
|
@ -243,17 +253,21 @@ Version read from the file fabric.
|
|||
Don't set this value - rclone will set it automatically.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: version
|
||||
- Env Var: RCLONE_FILEFABRIC_VERSION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --filefabric-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_FILEFABRIC_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -157,7 +157,7 @@ These flags are available for every command.
|
|||
--use-json-log Use json log format
|
||||
--use-mmap Use mmap allocator (see docs)
|
||||
--use-server-modtime Use server modified time instead of object metadata
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.57.0")
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0")
|
||||
-v, --verbose count Print lots more stuff (repeat for more)
|
||||
```
|
||||
|
||||
|
@ -170,7 +170,7 @@ and may be set in the config file.
|
|||
--acd-auth-url string Auth server URL
|
||||
--acd-client-id string OAuth Client Id
|
||||
--acd-client-secret string OAuth Client Secret
|
||||
--acd-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
|
||||
--acd-token string OAuth Access Token as a JSON blob
|
||||
--acd-token-url string Token server url
|
||||
|
@ -179,9 +179,9 @@ and may be set in the config file.
|
|||
--azureblob-access-tier string Access tier of blob: hot, cool or archive
|
||||
--azureblob-account string Storage Account Name
|
||||
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
|
||||
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB) (default 4Mi)
|
||||
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
|
||||
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
|
||||
--azureblob-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
|
||||
--azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
|
||||
--azureblob-endpoint string Endpoint for the service
|
||||
--azureblob-key string Storage Account Key
|
||||
--azureblob-list-chunk int Size of blob list (default 5000)
|
||||
|
@ -194,6 +194,7 @@ and may be set in the config file.
|
|||
--azureblob-public-access string Public access level of a container: blob or container
|
||||
--azureblob-sas-url string SAS URL for container level access only
|
||||
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
|
||||
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
|
||||
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
|
||||
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
|
||||
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
|
||||
|
@ -203,7 +204,7 @@ and may be set in the config file.
|
|||
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
|
||||
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
|
||||
--b2-download-url string Custom endpoint for downloads
|
||||
--b2-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--b2-endpoint string Endpoint for the service
|
||||
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
|
||||
--b2-key string Application Key
|
||||
|
@ -219,7 +220,7 @@ and may be set in the config file.
|
|||
--box-client-id string OAuth Client Id
|
||||
--box-client-secret string OAuth Client Secret
|
||||
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
|
||||
--box-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
|
||||
--box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
|
||||
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
|
||||
--box-owned-by string Only show items owned by the login (email address) passed in
|
||||
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
|
||||
|
@ -256,6 +257,7 @@ and may be set in the config file.
|
|||
--compress-remote string Remote to compress
|
||||
-L, --copy-links Follow symlinks and copy the pointed to item
|
||||
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
|
||||
--crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32")
|
||||
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
|
||||
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
|
||||
--crypt-password string Password or pass phrase for encryption (obscured)
|
||||
|
@ -270,8 +272,9 @@ and may be set in the config file.
|
|||
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
|
||||
--drive-client-id string Google Application Client Id
|
||||
--drive-client-secret string OAuth Client Secret
|
||||
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
|
||||
--drive-disable-http2 Disable drive using http2 (default true)
|
||||
--drive-encoding MultiEncoder This sets the encoding for the backend (default InvalidUtf8)
|
||||
--drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
|
||||
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
|
||||
--drive-formats string Deprecated: See export_formats
|
||||
--drive-impersonate string Impersonate this user when using a service account
|
||||
|
@ -288,6 +291,7 @@ and may be set in the config file.
|
|||
--drive-shared-with-me Only show files that are shared with me
|
||||
--drive-size-as-quota Show sizes as storage quota usage, not actual size
|
||||
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
|
||||
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
|
||||
--drive-skip-gdocs Skip google documents in all listings
|
||||
--drive-skip-shortcuts If set skip shortcut files
|
||||
--drive-starred-only Only show files that are starred
|
||||
|
@ -310,40 +314,41 @@ and may be set in the config file.
|
|||
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
|
||||
--dropbox-client-id string OAuth Client Id
|
||||
--dropbox-client-secret string OAuth Client Secret
|
||||
--dropbox-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
|
||||
--dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
|
||||
--dropbox-impersonate string Impersonate this user when using a business account
|
||||
--dropbox-shared-files Instructs rclone to work on individual shared files
|
||||
--dropbox-shared-folders Instructs rclone to work on shared folders
|
||||
--dropbox-token string OAuth Access Token as a JSON blob
|
||||
--dropbox-token-url string Token server url
|
||||
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
|
||||
--fichier-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
|
||||
--fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
|
||||
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
|
||||
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
|
||||
--fichier-shared-folder string If you want to download a shared folder, add this parameter
|
||||
--filefabric-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--filefabric-permanent-token string Permanent Authentication Token
|
||||
--filefabric-root-folder-id string ID of the root folder
|
||||
--filefabric-token string Session Token
|
||||
--filefabric-token-expiry string Token expiry time
|
||||
--filefabric-url string URL of the Enterprise File Fabric to connect to
|
||||
--filefabric-version string Version read from the file fabric
|
||||
--ftp-ask-password Allow asking for FTP password when needed
|
||||
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
|
||||
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
|
||||
--ftp-disable-epsv Disable using EPSV even if server advertises support
|
||||
--ftp-disable-mlsd Disable using MLSD even if server advertises support
|
||||
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
|
||||
--ftp-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
|
||||
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
|
||||
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
|
||||
--ftp-host string FTP host to connect to
|
||||
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
|
||||
--ftp-no-check-certificate Do not verify the TLS certificate of the server
|
||||
--ftp-pass string FTP password (obscured)
|
||||
--ftp-port string FTP port number (default 21)
|
||||
--ftp-port int FTP port number (default 21)
|
||||
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
|
||||
--ftp-tls Use Implicit FTPS (FTP over TLS)
|
||||
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
|
||||
--ftp-user string FTP username, leave blank for current username, $USER
|
||||
--ftp-user string FTP username (default "$USER")
|
||||
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
|
||||
--gcs-anonymous Access public buckets and objects without credentials
|
||||
--gcs-auth-url string Auth server URL
|
||||
|
@ -351,7 +356,7 @@ and may be set in the config file.
|
|||
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
|
||||
--gcs-client-id string OAuth Client Id
|
||||
--gcs-client-secret string OAuth Client Secret
|
||||
--gcs-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
|
||||
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
|
||||
--gcs-location string Location for the newly created buckets
|
||||
--gcs-object-acl string Access Control List for new objects
|
||||
--gcs-project-number string Project number
|
||||
|
@ -362,7 +367,7 @@ and may be set in the config file.
|
|||
--gphotos-auth-url string Auth server URL
|
||||
--gphotos-client-id string OAuth Client Id
|
||||
--gphotos-client-secret string OAuth Client Secret
|
||||
--gphotos-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
|
||||
--gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
|
||||
--gphotos-include-archived Also view and download archived media
|
||||
--gphotos-read-only Set to make the Google Photos backend read only
|
||||
--gphotos-read-size Set to read the size of media items
|
||||
|
@ -374,38 +379,39 @@ and may be set in the config file.
|
|||
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
|
||||
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
|
||||
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
|
||||
--hdfs-encoding MultiEncoder This sets the encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
|
||||
--hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
|
||||
--hdfs-namenode string Hadoop name node and port
|
||||
--hdfs-service-principal-name string Kerberos service principal name for the namenode
|
||||
--hdfs-username string Hadoop user name
|
||||
--http-headers CommaSepList Set HTTP headers for all transactions
|
||||
--http-no-head Don't use HEAD requests to find file sizes in dir listing
|
||||
--http-no-head Don't use HEAD requests
|
||||
--http-no-slash Set this if the site doesn't end directories with /
|
||||
--http-url string URL of http host to connect to
|
||||
--hubic-auth-url string Auth server URL
|
||||
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
|
||||
--hubic-client-id string OAuth Client Id
|
||||
--hubic-client-secret string OAuth Client Secret
|
||||
--hubic-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
|
||||
--hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
|
||||
--hubic-no-chunk Don't chunk files during streaming upload
|
||||
--hubic-token string OAuth Access Token as a JSON blob
|
||||
--hubic-token-url string Token server url
|
||||
--jottacloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
|
||||
--jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
|
||||
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
|
||||
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
|
||||
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
|
||||
--jottacloud-trashed-only Only show files that are in the trash
|
||||
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
|
||||
--koofr-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
|
||||
--koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--koofr-endpoint string The Koofr API endpoint to use
|
||||
--koofr-mountid string Mount ID of the mount to use
|
||||
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
|
||||
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
|
||||
--koofr-provider string Choose your storage provider
|
||||
--koofr-setmtime Does the backend support setting modification time (default true)
|
||||
--koofr-user string Your Koofr user name
|
||||
--koofr-user string Your user name
|
||||
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
|
||||
--local-case-insensitive Force the filesystem to report itself as case insensitive
|
||||
--local-case-sensitive Force the filesystem to report itself as case sensitive
|
||||
--local-encoding MultiEncoder This sets the encoding for the backend (default Slash,Dot)
|
||||
--local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
|
||||
--local-no-check-updated Don't check to see if the files change during upload
|
||||
--local-no-preallocate Disable preallocation of disk space for transferred files
|
||||
--local-no-set-modtime Disable setting modtime
|
||||
|
@ -414,7 +420,7 @@ and may be set in the config file.
|
|||
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
|
||||
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
|
||||
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
|
||||
--mailru-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--mailru-pass string Password (obscured)
|
||||
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
|
||||
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
|
||||
|
@ -422,18 +428,23 @@ and may be set in the config file.
|
|||
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
|
||||
--mailru-user string User name (usually email)
|
||||
--mega-debug Output more debug from Mega
|
||||
--mega-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--mega-hard-delete Delete files permanently rather than putting them into the trash
|
||||
--mega-pass string Password (obscured)
|
||||
--mega-user string User name
|
||||
--netstorage-account string Set the NetStorage account name
|
||||
--netstorage-host string Domain+path of NetStorage host to connect to
|
||||
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
|
||||
--netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
|
||||
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
|
||||
--onedrive-auth-url string Auth server URL
|
||||
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
|
||||
--onedrive-client-id string OAuth Client Id
|
||||
--onedrive-client-secret string OAuth Client Secret
|
||||
--onedrive-disable-site-permission Disable the request for Sites.Read.All permission
|
||||
--onedrive-drive-id string The ID of the drive to use
|
||||
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
|
||||
--onedrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
|
||||
--onedrive-link-password string Set the password for links created by the link command
|
||||
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
|
||||
|
@ -441,27 +452,28 @@ and may be set in the config file.
|
|||
--onedrive-list-chunk int Size of listing chunk (default 1000)
|
||||
--onedrive-no-versions Remove all versions on modifying operations
|
||||
--onedrive-region string Choose national cloud region for OneDrive (default "global")
|
||||
--onedrive-root-folder-id string ID of the root folder
|
||||
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
|
||||
--onedrive-token string OAuth Access Token as a JSON blob
|
||||
--onedrive-token-url string Token server url
|
||||
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
|
||||
--opendrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
|
||||
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
|
||||
--opendrive-password string Password (obscured)
|
||||
--opendrive-username string Username
|
||||
--pcloud-auth-url string Auth server URL
|
||||
--pcloud-client-id string OAuth Client Id
|
||||
--pcloud-client-secret string OAuth Client Secret
|
||||
--pcloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
|
||||
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
|
||||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--premiumizeme-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--putio-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--qingstor-access-key-id string QingStor Access Key ID
|
||||
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
|
||||
--qingstor-connection-retries int Number of connection retries (default 3)
|
||||
--qingstor-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8)
|
||||
--qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
|
||||
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API
|
||||
--qingstor-env-auth Get QingStor credentials from runtime
|
||||
--qingstor-secret-access-key string QingStor Secret Access Key (password)
|
||||
|
@ -476,12 +488,14 @@ and may be set in the config file.
|
|||
--s3-disable-checksum Don't store MD5 checksum with object metadata
|
||||
--s3-disable-http2 Disable usage of http2 for S3 backends
|
||||
--s3-download-url string Custom endpoint for downloads
|
||||
--s3-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
|
||||
--s3-endpoint string Endpoint for S3 API
|
||||
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
|
||||
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
|
||||
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
|
||||
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
|
||||
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
|
||||
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
|
||||
--s3-location-constraint string Location constraint - must be set to match the Region
|
||||
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
|
||||
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
|
||||
|
@ -505,10 +519,11 @@ and may be set in the config file.
|
|||
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
|
||||
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
|
||||
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
|
||||
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
|
||||
--s3-v2-auth If true use v2 authentication
|
||||
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
|
||||
--seafile-create-library Should rclone create a library if it doesn't exist
|
||||
--seafile-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
|
||||
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
|
||||
--seafile-library string Name of the library
|
||||
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
|
||||
--seafile-pass string Password (obscured)
|
||||
|
@ -528,7 +543,7 @@ and may be set in the config file.
|
|||
--sftp-md5sum-command string The command used to read md5 hashes
|
||||
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
|
||||
--sftp-path-override string Override path used by SSH connection
|
||||
--sftp-port string SSH port number (default 22)
|
||||
--sftp-port int SSH port number (default 22)
|
||||
--sftp-pubkey-file string Optional path to public key file
|
||||
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
|
||||
--sftp-set-modtime Set the modified time on the remote if set (default true)
|
||||
|
@ -537,23 +552,28 @@ and may be set in the config file.
|
|||
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
|
||||
--sftp-use-fstat If set use fstat instead of stat
|
||||
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
|
||||
--sftp-user string SSH username, leave blank for current username, $USER
|
||||
--sftp-user string SSH username (default "$USER")
|
||||
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
|
||||
--sharefile-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--sharefile-endpoint string Endpoint for API calls
|
||||
--sharefile-root-folder-id string ID of the root folder
|
||||
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
|
||||
--sia-api-password string Sia Daemon API Password (obscured)
|
||||
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
|
||||
--sia-encoding MultiEncoder This sets the encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
|
||||
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
|
||||
--sia-user-agent string Siad User Agent (default "Sia-Agent")
|
||||
--skip-links Don't warn about skipped symlinks
|
||||
--storj-access-grant string Access grant
|
||||
--storj-api-key string API key
|
||||
--storj-passphrase string Encryption passphrase
|
||||
--storj-provider string Choose an authentication method (default "existing")
|
||||
--storj-satellite-address string Satellite address (default "us-central-1.storj.io")
|
||||
--sugarsync-access-key-id string Sugarsync Access Key ID
|
||||
--sugarsync-app-id string Sugarsync App ID
|
||||
--sugarsync-authorization string Sugarsync authorization
|
||||
--sugarsync-authorization-expiry string Sugarsync authorization expiry
|
||||
--sugarsync-deleted-id string Sugarsync deleted folder id
|
||||
--sugarsync-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
|
||||
--sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
|
||||
--sugarsync-hard-delete Permanently delete files if true
|
||||
--sugarsync-private-access-key string Sugarsync Private Access Key
|
||||
--sugarsync-refresh-token string Sugarsync refresh token
|
||||
|
@ -567,7 +587,7 @@ and may be set in the config file.
|
|||
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
|
||||
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
|
||||
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
|
||||
--swift-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
|
||||
--swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
|
||||
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
|
||||
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
|
||||
--swift-key string API key or password (OS_PASSWORD)
|
||||
|
@ -581,21 +601,16 @@ and may be set in the config file.
|
|||
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
|
||||
--swift-user string User name to log in (OS_USERNAME)
|
||||
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
|
||||
--tardigrade-access-grant string Access grant
|
||||
--tardigrade-api-key string API key
|
||||
--tardigrade-passphrase string Encryption passphrase
|
||||
--tardigrade-provider string Choose an authentication method (default "existing")
|
||||
--tardigrade-satellite-address string Satellite address (default "us-central-1.tardigrade.io")
|
||||
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
|
||||
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
|
||||
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
|
||||
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
|
||||
--union-upstreams string List of space separated upstreams
|
||||
--uptobox-access-token string Your access token
|
||||
--uptobox-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
|
||||
--uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
|
||||
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
|
||||
--webdav-bearer-token-command string Command to run to get a bearer token
|
||||
--webdav-encoding string This sets the encoding for the backend
|
||||
--webdav-encoding string The encoding for the backend
|
||||
--webdav-headers CommaSepList Set HTTP headers for all transactions
|
||||
--webdav-pass string Password (obscured)
|
||||
--webdav-url string URL of http host to connect to
|
||||
|
@ -604,13 +619,14 @@ and may be set in the config file.
|
|||
--yandex-auth-url string Auth server URL
|
||||
--yandex-client-id string OAuth Client Id
|
||||
--yandex-client-secret string OAuth Client Secret
|
||||
--yandex-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
|
||||
--yandex-hard-delete Delete files permanently rather than putting them into the trash
|
||||
--yandex-token string OAuth Access Token as a JSON blob
|
||||
--yandex-token-url string Token server url
|
||||
--zoho-auth-url string Auth server URL
|
||||
--zoho-client-id string OAuth Client Id
|
||||
--zoho-client-secret string OAuth Client Secret
|
||||
--zoho-encoding MultiEncoder This sets the encoding for the backend (default Del,Ctl,InvalidUtf8)
|
||||
--zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
|
||||
--zoho-region string Zoho region to connect to
|
||||
--zoho-token string OAuth Access Token as a JSON blob
|
||||
--zoho-token-url string Token server url
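As a hedged illustration of how any of the backend flags listed above can be supplied (the `mysftp` remote name is a placeholder), the same setting can be given on the command line, as an environment variable, or in the config file:

```
rclone lsd mysftp: --sftp-port 2222        # command line flag
RCLONE_SFTP_PORT=2222 rclone lsd mysftp:   # environment variable
# or in rclone.conf:
# [mysftp]
# type = sftp
# port = 2222
```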
|
||||
|
|
|
@ -146,28 +146,34 @@ FTP host to connect to.
|
|||
|
||||
E.g. "ftp.example.com".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: host
|
||||
- Env Var: RCLONE_FTP_HOST
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --ftp-user
|
||||
|
||||
FTP username, leave blank for current username, $USER.
|
||||
FTP username.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_FTP_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Default: "$USER"
|
||||
|
||||
#### --ftp-port
|
||||
|
||||
FTP port, leave blank to use default (21).
|
||||
FTP port number.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: port
|
||||
- Env Var: RCLONE_FTP_PORT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Type: int
|
||||
- Default: 21
|
||||
|
||||
#### --ftp-pass
|
||||
|
||||
|
@ -175,19 +181,12 @@ FTP password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_FTP_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
|
||||
#### --ftp-ask-password
|
||||
|
||||
Ask for password when connecting to a FTP server and no password is configured.
|
||||
|
||||
- Config: ask_password
|
||||
- Env Var: RCLONE_FTP_ASK_PASSWORD
|
||||
- Type: bool
|
||||
- Default: false
|
||||
- Required: false
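A hedged sketch of a one-off FTP listing using the host, user, port and pass options documented above; `ftp.example.com`, `alice` and the password are placeholder values:

```
rclone lsd :ftp: --ftp-host ftp.example.com --ftp-user alice --ftp-port 21 --ftp-pass $(rclone obscure 'mypassword')

# The same settings via environment variables:
export RCLONE_FTP_HOST=ftp.example.com
export RCLONE_FTP_USER=alice
export RCLONE_FTP_PASS=$(rclone obscure 'mypassword')
rclone lsd :ftp:
```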
|
||||
|
||||
#### --ftp-tls
|
||||
|
||||
|
@ -198,6 +197,8 @@ right from the start which breaks compatibility with
|
|||
non-TLS-aware servers. This is usually served over port 990 rather
|
||||
than port 21. Cannot be used in combination with explicit FTP.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tls
|
||||
- Env Var: RCLONE_FTP_TLS
|
||||
- Type: bool
|
||||
|
@ -211,6 +212,8 @@ When using explicit FTP over TLS the client explicitly requests
|
|||
security from the server in order to upgrade a plain text connection
|
||||
to an encrypted one. Cannot be used in combination with implicit FTP.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: explicit_tls
|
||||
- Env Var: RCLONE_FTP_EXPLICIT_TLS
|
||||
- Type: bool
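A hedged sketch contrasting the two TLS modes described above; the host name is a placeholder:

```
rclone lsd :ftp: --ftp-host ftps.example.com --ftp-tls            # implicit FTPS, usually served on port 990
rclone lsd :ftp: --ftp-host ftps.example.com --ftp-explicit-tls   # explicit FTPS upgrade over the standard port 21
```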
|
||||
|
@ -224,6 +227,8 @@ Here are the advanced options specific to ftp (FTP Connection).
|
|||
|
||||
Maximum number of FTP simultaneous connections, 0 for unlimited.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: concurrency
|
||||
- Env Var: RCLONE_FTP_CONCURRENCY
|
||||
- Type: int
|
||||
|
@ -233,6 +238,8 @@ Maximum number of FTP simultaneous connections, 0 for unlimited.
|
|||
|
||||
Do not verify the TLS certificate of the server.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_check_certificate
|
||||
- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
|
||||
- Type: bool
|
||||
|
@ -242,6 +249,8 @@ Do not verify the TLS certificate of the server.
|
|||
|
||||
Disable using EPSV even if server advertises support.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_epsv
|
||||
- Env Var: RCLONE_FTP_DISABLE_EPSV
|
||||
- Type: bool
|
||||
|
@ -251,6 +260,8 @@ Disable using EPSV even if server advertises support.
|
|||
|
||||
Disable using MLSD even if server advertises support.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_mlsd
|
||||
- Env Var: RCLONE_FTP_DISABLE_MLSD
|
||||
- Type: bool
|
||||
|
@ -260,6 +271,8 @@ Disable using MLSD even if server advertises support.
|
|||
|
||||
Use MDTM to set modification time (VsFtpd quirk)
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: writing_mdtm
|
||||
- Env Var: RCLONE_FTP_WRITING_MDTM
|
||||
- Type: bool
|
||||
|
@ -275,6 +288,8 @@ given, rclone will empty the connection pool.
|
|||
Set to 0 to keep connections indefinitely.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: idle_timeout
|
||||
- Env Var: RCLONE_FTP_IDLE_TIMEOUT
|
||||
- Type: Duration
|
||||
|
@ -284,6 +299,8 @@ Set to 0 to keep connections indefinitely.
|
|||
|
||||
Maximum time to wait for a response to close.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: close_timeout
|
||||
- Env Var: RCLONE_FTP_CLOSE_TIMEOUT
|
||||
- Type: Duration
|
||||
|
@ -297,6 +314,8 @@ TLS cache allows to resume TLS sessions and reuse PSK between connections.
|
|||
Increase this if the default size is not enough, resulting in TLS resumption errors.
|
||||
Enabled by default. Use 0 to disable.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tls_cache_size
|
||||
- Env Var: RCLONE_FTP_TLS_CACHE_SIZE
|
||||
- Type: int
|
||||
|
@ -306,6 +325,8 @@ Enabled by default. Use 0 to disable.
|
|||
|
||||
Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_tls13
|
||||
- Env Var: RCLONE_FTP_DISABLE_TLS13
|
||||
- Type: bool
|
||||
|
@ -315,17 +336,35 @@ Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
|
|||
|
||||
Maximum time to wait for data connection closing status.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shut_timeout
|
||||
- Env Var: RCLONE_FTP_SHUT_TIMEOUT
|
||||
- Type: Duration
|
||||
- Default: 1m0s
|
||||
|
||||
#### --ftp-ask-password
|
||||
|
||||
Allow asking for FTP password when needed.
|
||||
|
||||
If this is set and no password is supplied then rclone will ask for a password.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: ask_password
|
||||
- Env Var: RCLONE_FTP_ASK_PASSWORD
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
#### --ftp-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_FTP_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -281,10 +281,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_GCS_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-client-secret
|
||||
|
||||
|
@ -292,10 +294,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_GCS_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-project-number
|
||||
|
||||
|
@ -303,10 +307,12 @@ Project number.
|
|||
|
||||
Optional - needed only for list/create/delete buckets - see your developer console.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: project_number
|
||||
- Env Var: RCLONE_GCS_PROJECT_NUMBER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-service-account-file
|
||||
|
||||
|
@ -317,10 +323,12 @@ Needed only if you want to use SA instead of interactive login.
|
|||
|
||||
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_account_file
|
||||
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-service-account-credentials
|
||||
|
||||
|
@ -329,10 +337,12 @@ Service Account Credentials JSON blob.
|
|||
Leave blank normally.
|
||||
Needed only if you want to use SA instead of interactive login.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_account_credentials
|
||||
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-anonymous
|
||||
|
||||
|
@ -340,6 +350,8 @@ Access public buckets and objects without credentials.
|
|||
|
||||
Set to 'true' if you just want to download files and don't configure credentials.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: anonymous
|
||||
- Env Var: RCLONE_GCS_ANONYMOUS
|
||||
- Type: bool
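A minimal sketch of an anonymous, download-only configuration, assuming a hypothetical remote name and public bucket:

```
[gcs-public]
type = google cloud storage
anonymous = true
```

Then, for example: `rclone copy gcs-public:some-public-bucket/path /tmp/download`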
|
||||
|
@ -349,10 +361,12 @@ Set to 'true' if you just want to download files and don't configure credentials
|
|||
|
||||
Access Control List for new objects.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: object_acl
|
||||
- Env Var: RCLONE_GCS_OBJECT_ACL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "authenticatedRead"
|
||||
- Object owner gets OWNER access.
|
||||
|
@ -377,10 +391,12 @@ Access Control List for new objects.
|
|||
|
||||
Access Control List for new buckets.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: bucket_acl
|
||||
- Env Var: RCLONE_GCS_BUCKET_ACL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "authenticatedRead"
|
||||
- Project team owners get OWNER access.
|
||||
|
@ -413,6 +429,8 @@ When it is set, rclone:
|
|||
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: bucket_policy_only
|
||||
- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
|
||||
- Type: bool
|
||||
|
@ -422,10 +440,12 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
|
|||
|
||||
Location for the newly created buckets.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: location
|
||||
- Env Var: RCLONE_GCS_LOCATION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Empty for default location (US)
|
||||
|
@ -441,12 +461,22 @@ Location for the newly created buckets.
|
|||
- Hong Kong
|
||||
- "asia-northeast1"
|
||||
- Tokyo
|
||||
- "asia-northeast2"
|
||||
- Osaka
|
||||
- "asia-northeast3"
|
||||
- Seoul
|
||||
- "asia-south1"
|
||||
- Mumbai
|
||||
- "asia-south2"
|
||||
- Delhi
|
||||
- "asia-southeast1"
|
||||
- Singapore
|
||||
- "asia-southeast2"
|
||||
- Jakarta
|
||||
- "australia-southeast1"
|
||||
- Sydney
|
||||
- "australia-southeast2"
|
||||
- Melbourne
|
||||
- "europe-north1"
|
||||
- Finland
|
||||
- "europe-west1"
|
||||
|
@ -457,6 +487,10 @@ Location for the newly created buckets.
|
|||
- Frankfurt
|
||||
- "europe-west4"
|
||||
- Netherlands
|
||||
- "europe-west6"
|
||||
- Zürich
|
||||
- "europe-central2"
|
||||
- Warsaw
|
||||
- "us-central1"
|
||||
- Iowa
|
||||
- "us-east1"
|
||||
|
@ -467,15 +501,35 @@ Location for the newly created buckets.
|
|||
- Oregon
|
||||
- "us-west2"
|
||||
- California
|
||||
- "us-west3"
|
||||
- Salt Lake City
|
||||
- "us-west4"
|
||||
- Las Vegas
|
||||
- "northamerica-northeast1"
|
||||
- Montréal
|
||||
- "northamerica-northeast2"
|
||||
- Toronto
|
||||
- "southamerica-east1"
|
||||
- São Paulo
|
||||
- "southamerica-west1"
|
||||
- Santiago
|
||||
- "asia1"
|
||||
- Dual region: asia-northeast1 and asia-northeast2.
|
||||
- "eur4"
|
||||
- Dual region: europe-north1 and europe-west4.
|
||||
- "nam4"
|
||||
- Dual region: us-central1 and us-east1.
|
||||
|
||||
#### --gcs-storage-class
|
||||
|
||||
The storage class to use when storing objects in Google Cloud Storage.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_GCS_STORAGE_CLASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
|
@ -500,10 +554,12 @@ Here are the advanced options specific to google cloud storage (Google Cloud Sto
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_GCS_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-auth-url
|
||||
|
||||
|
@ -511,10 +567,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_GCS_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-token-url
|
||||
|
||||
|
@ -522,17 +580,21 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_GCS_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gcs-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_GCS_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -232,10 +232,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_GPHOTOS_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gphotos-client-secret
|
||||
|
||||
|
@ -243,10 +245,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gphotos-read-only
|
||||
|
||||
|
@ -255,6 +259,8 @@ Set to make the Google Photos backend read only.
|
|||
If you choose read only then rclone will only request read only access
|
||||
to your photos, otherwise rclone will request full access.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: read_only
|
||||
- Env Var: RCLONE_GPHOTOS_READ_ONLY
|
||||
- Type: bool
|
||||
|
@ -268,10 +274,12 @@ Here are the advanced options specific to google photos (Google Photos).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_GPHOTOS_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gphotos-auth-url
|
||||
|
||||
|
@ -279,10 +287,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_GPHOTOS_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gphotos-token-url
|
||||
|
||||
|
@ -290,10 +300,12 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_GPHOTOS_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --gphotos-read-size
|
||||
|
||||
|
@ -305,6 +317,8 @@ rclone mount needs to know the size of files in advance of reading
|
|||
them, so setting this flag when using rclone mount is recommended if
|
||||
you want to read the media.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: read_size
|
||||
- Env Var: RCLONE_GPHOTOS_READ_SIZE
|
||||
- Type: bool
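A hedged example of the recommendation above; the `gphotos:` remote name and mount point are placeholders:

```
rclone mount gphotos: /mnt/photos --gphotos-read-size
```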
|
||||
|
@ -314,6 +328,8 @@ you want to read the media.
|
|||
|
||||
Year limits the photos to be downloaded to those which are uploaded after the given year.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: start_year
|
||||
- Env Var: RCLONE_GPHOTOS_START_YEAR
|
||||
- Type: int
|
||||
|
@ -323,7 +339,7 @@ Year limits the photos to be downloaded to those which are uploaded after the gi
|
|||
|
||||
Also view and download archived media.
|
||||
|
||||
By default rclone does not request archived media. Thus, when syncing,
|
||||
By default, rclone does not request archived media. Thus, when syncing,
|
||||
archived media is not visible in directory listings or transferred.
|
||||
|
||||
Note that media in albums is always visible and synced, no matter
|
||||
|
@ -335,6 +351,8 @@ listings and transferred.
|
|||
Without this flag, archived media will not be visible in directory
|
||||
listings and won't be transferred.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: include_archived
|
||||
- Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
|
||||
- Type: bool
|
||||
|
@ -342,10 +360,12 @@ listings and won't be transferred.
|
|||
|
||||
#### --gphotos-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_GPHOTOS_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -178,15 +178,19 @@ Here are the standard options specific to hasher (Better checksums for other rem
|
|||
|
||||
Remote to cache checksums for (e.g. myRemote:path).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: remote
|
||||
- Env Var: RCLONE_HASHER_REMOTE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --hasher-hashes
|
||||
|
||||
Comma separated list of supported checksum types.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hashes
|
||||
- Env Var: RCLONE_HASHER_HASHES
|
||||
- Type: CommaSepList
|
||||
|
@ -196,6 +200,8 @@ Comma separated list of supported checksum types.
|
|||
|
||||
Maximum time to keep checksums in cache (0 = no cache, off = cache forever).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: max_age
|
||||
- Env Var: RCLONE_HASHER_MAX_AGE
|
||||
- Type: Duration
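A minimal rclone.conf sketch combining the options above; the remote name and the particular hash types listed are assumptions (use any checksum types rclone supports):

```
[hashed]
type = hasher
remote = myRemote:path
hashes = md5,sha1
max_age = off
```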
|
||||
|
@ -209,6 +215,8 @@ Here are the advanced options specific to hasher (Better checksums for other rem
|
|||
|
||||
Auto-update checksum for files smaller than this size (disabled by default).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auto_size
|
||||
- Env Var: RCLONE_HASHER_AUTO_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -228,7 +236,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### drop
|
||||
|
||||
|
|
|
@ -159,19 +159,23 @@ Hadoop name node and port.
|
|||
|
||||
E.g. "namenode:8020" to connect to host namenode at port 8020.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: namenode
|
||||
- Env Var: RCLONE_HDFS_NAMENODE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --hdfs-username
|
||||
|
||||
Hadoop user name.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: username
|
||||
- Env Var: RCLONE_HDFS_USERNAME
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "root"
|
||||
- Connect to hdfs as root.
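A minimal rclone.conf sketch using the example values documented above; the remote name is a placeholder:

```
[hdfs-remote]
type = hdfs
namenode = namenode:8020
username = root
```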
|
||||
|
@ -188,10 +192,12 @@ Enables KERBEROS authentication. Specifies the Service Principal Name
|
|||
(SERVICE/FQDN) for the namenode. E.g. \"hdfs/namenode.hadoop.docker\"
|
||||
for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: service_principal_name
|
||||
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --hdfs-data-transfer-protection
|
||||
|
||||
|
@ -202,20 +208,24 @@ checks, and wire encryption is required when communicating the the
|
|||
datanodes. Possible values are 'authentication', 'integrity' and
|
||||
'privacy'. Used only with KERBEROS enabled.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: data_transfer_protection
|
||||
- Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "privacy"
|
||||
- Ensure authentication, integrity and encryption enabled.
|
||||
|
||||
#### --hdfs-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_HDFS_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -134,10 +134,12 @@ URL of http host to connect to.
|
|||
|
||||
E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: url
|
||||
- Env Var: RCLONE_HTTP_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -152,10 +154,11 @@ Use this to set additional HTTP headers for all transactions.
|
|||
The input format is a comma separated list of key,value pairs. Standard
|
||||
[CSV encoding](https://godoc.org/encoding/csv) may be used.
|
||||
|
||||
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
|
||||
For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
|
||||
|
||||
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: headers
|
||||
- Env Var: RCLONE_HTTP_HEADERS
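A hedged example of the header syntax described above, reusing the values from the text; the URL is a placeholder:

```
rclone lsd :http: --http-url https://example.com --http-headers "Cookie,name=value,Authorization,xxx"
```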
|
||||
|
@ -177,6 +180,8 @@ URLs from them rather than downloading them.
|
|||
Note that this may cause rclone to confuse genuine HTML files with
|
||||
directories.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_slash
|
||||
- Env Var: RCLONE_HTTP_NO_SLASH
|
||||
- Type: bool
|
||||
|
@ -184,8 +189,9 @@ directories.
|
|||
|
||||
#### --http-no-head
|
||||
|
||||
Don't use HEAD requests to find file sizes in dir listing.
|
||||
Don't use HEAD requests.
|
||||
|
||||
HEAD requests are mainly used to find file sizes in dir listing.
|
||||
If your site is being very slow to load then you can try this option.
|
||||
Normally rclone does a HEAD request for each potential file in a
|
||||
directory listing to:
|
||||
|
@ -194,12 +200,11 @@ directory listing to:
|
|||
- check it really exists
|
||||
- check to see if it is a directory
|
||||
|
||||
If you set this option, rclone will not do the HEAD request. This will mean
|
||||
|
||||
- directory listings are much quicker
|
||||
- rclone won't have the times or sizes of any files
|
||||
- some files that don't exist may be in the listing
|
||||
If you set this option, rclone will not do the HEAD request. This will mean
|
||||
that directory listings are much quicker, but rclone won't have the times or
|
||||
sizes of any files, and some files that don't exist may be in the listing.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_head
|
||||
- Env Var: RCLONE_HTTP_NO_HEAD
|
||||
|
|
|
@ -117,10 +117,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_HUBIC_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --hubic-client-secret
|
||||
|
||||
|
@ -128,10 +130,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_HUBIC_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -141,10 +145,12 @@ Here are the advanced options specific to hubic (Hubic).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_HUBIC_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --hubic-auth-url
|
||||
|
||||
|
@ -152,10 +158,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_HUBIC_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --hubic-token-url
|
||||
|
||||
|
@ -163,10 +171,12 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_HUBIC_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --hubic-chunk-size
|
||||
|
||||
|
@ -175,6 +185,8 @@ Above this size files will be chunked into a _segments container.
|
|||
Above this size files will be chunked into a _segments container. The
|
||||
default for this is 5 GiB which is its maximum value.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -193,6 +205,8 @@ files are easier to deal with and have an MD5SUM.
|
|||
Rclone will still chunk files bigger than chunk_size when doing normal
|
||||
copy operations.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_chunk
|
||||
- Env Var: RCLONE_HUBIC_NO_CHUNK
|
||||
- Type: bool
|
||||
|
@ -200,10 +214,12 @@ copy operations.
|
|||
|
||||
#### --hubic-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_HUBIC_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -241,6 +241,8 @@ Here are the advanced options specific to jottacloud (Jottacloud).
|
|||
|
||||
Files bigger than this will be cached on disk to calculate the MD5 if required.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: md5_memory_limit
|
||||
- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
|
||||
- Type: SizeSuffix
|
||||
|
@ -252,6 +254,8 @@ Only show files that are in the trash.
|
|||
|
||||
This will show trashed files in their original directory structure.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: trashed_only
|
||||
- Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY
|
||||
- Type: bool
|
||||
|
@ -261,6 +265,8 @@ This will show trashed files in their original directory structure.
|
|||
|
||||
Delete files permanently rather than putting them into the trash.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hard_delete
|
||||
- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
|
||||
- Type: bool
|
||||
|
@ -270,6 +276,8 @@ Delete files permanently rather than putting them into the trash.
|
|||
|
||||
Files bigger than this can be resumed if the upload fails.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_resume_limit
|
||||
- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
|
||||
- Type: SizeSuffix
|
||||
|
@ -279,6 +287,8 @@ Files bigger than this can be resumed if the upload fail's.
|
|||
|
||||
Avoid server side versioning by deleting files and recreating files instead of overwriting them.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_versions
|
||||
- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS
|
||||
- Type: bool
|
||||
|
@ -286,10 +296,12 @@ Avoid server side versioning by deleting files and recreating files instead of o
|
|||
|
||||
#### --jottacloud-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_JOTTACLOUD_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -333,10 +333,12 @@ Here are the advanced options specific to local (Local Disk).
|
|||
|
||||
Disable UNC (long path names) conversion on Windows.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: nounc
|
||||
- Env Var: RCLONE_LOCAL_NOUNC
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "true"
|
||||
- Disables long file names.
|
||||
|
@ -345,6 +347,8 @@ Disable UNC (long path names) conversion on Windows.
|
|||
|
||||
Follow symlinks and copy the pointed to item.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: copy_links
|
||||
- Env Var: RCLONE_LOCAL_COPY_LINKS
|
||||
- Type: bool
|
||||
|
@ -354,6 +358,8 @@ Follow symlinks and copy the pointed to item.
|
|||
|
||||
Translate symlinks to/from regular files with a '.rclonelink' extension.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: links
|
||||
- Env Var: RCLONE_LOCAL_LINKS
|
||||
- Type: bool
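A hedged sketch contrasting the two symlink options above; the source path and remote are placeholders:

```
rclone copy /home/user/data remote:backup --copy-links   # follow symlinks and copy their targets
rclone copy /home/user/data remote:backup --links        # store symlinks as .rclonelink files
```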
|
||||
|
@ -366,6 +372,8 @@ Don't warn about skipped symlinks.
|
|||
This flag disables warning messages on skipped symlinks or junction
|
||||
points, as you explicitly acknowledge that they should be skipped.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_links
|
||||
- Env Var: RCLONE_LOCAL_SKIP_LINKS
|
||||
- Type: bool
|
||||
|
@ -384,6 +392,8 @@ Rclone used to use the Stat size of links as the link size, but this fails in qu
|
|||
So rclone now always reads the link.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: zero_size_links
|
||||
- Env Var: RCLONE_LOCAL_ZERO_SIZE_LINKS
|
||||
- Type: bool
|
||||
|
@ -406,6 +416,8 @@ some OSes.
|
|||
Note that rclone compares filenames with unicode normalization in the sync
|
||||
routine so this flag shouldn't normally be used.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: unicode_normalization
|
||||
- Env Var: RCLONE_LOCAL_UNICODE_NORMALIZATION
|
||||
- Type: bool
|
||||
|
@ -440,6 +452,8 @@ time we:
|
|||
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_check_updated
|
||||
- Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
|
||||
- Type: bool
|
||||
|
@ -449,6 +463,8 @@ time we:
|
|||
|
||||
Don't cross filesystem boundaries (unix/macOS only).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: one_file_system
|
||||
- Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
|
||||
- Type: bool
|
||||
|
@ -462,6 +478,8 @@ Normally the local backend declares itself as case insensitive on
|
|||
Windows/macOS and case sensitive for everything else. Use this flag
|
||||
to override the default choice.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: case_sensitive
|
||||
- Env Var: RCLONE_LOCAL_CASE_SENSITIVE
|
||||
- Type: bool
|
||||
|
@ -475,6 +493,8 @@ Normally the local backend declares itself as case insensitive on
|
|||
Windows/macOS and case sensitive for everything else. Use this flag
|
||||
to override the default choice.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: case_insensitive
|
||||
- Env Var: RCLONE_LOCAL_CASE_INSENSITIVE
|
||||
- Type: bool
|
||||
|
@ -490,6 +510,8 @@ Stream) may incorrectly set the actual file size equal to the
|
|||
preallocated space, causing checksum and file size checks to fail.
|
||||
Use this flag to disable preallocation.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_preallocate
|
||||
- Env Var: RCLONE_LOCAL_NO_PREALLOCATE
|
||||
- Type: bool
|
||||
|
@ -504,6 +526,8 @@ multi-thread downloads. This avoids long pauses on large files where
|
|||
the OS zeros the file. However sparse files may be undesirable as they
|
||||
cause disk fragmentation and can be slow to work with.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_sparse
|
||||
- Env Var: RCLONE_LOCAL_NO_SPARSE
|
||||
- Type: bool
|
||||
|
@ -519,6 +543,8 @@ the user rclone is running as does not own the file uploaded, such as
|
|||
when copying to a CIFS mount owned by another user. If this option is
|
||||
enabled, rclone will no longer update the modtime after copying a file.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_set_modtime
|
||||
- Env Var: RCLONE_LOCAL_NO_SET_MODTIME
|
||||
- Type: bool
|
||||
|
@ -526,10 +552,12 @@ enabled, rclone will no longer update the modtime after copying a file.
|
|||
|
||||
#### --local-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_LOCAL_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -549,7 +577,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### noop
|
||||
|
||||
|
|
|
@ -162,10 +162,12 @@ Here are the standard options specific to mailru (Mail.ru Cloud).
|
|||
|
||||
User name (usually email).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_MAILRU_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --mailru-pass
|
||||
|
||||
|
@ -173,10 +175,12 @@ Password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_MAILRU_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --mailru-speedup-enable
|
||||
|
||||
|
@ -191,6 +195,8 @@ content hash in advance and decide whether full upload is required.
|
|||
Also, if rclone does not know file size in advance (e.g. in case of
|
||||
streaming or partial uploads), it will not even try this optimization.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: speedup_enable
|
||||
- Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE
|
||||
- Type: bool
|
||||
|
@ -211,6 +217,8 @@ Comma separated list of file name patterns eligible for speedup (put by hash).
|
|||
|
||||
Patterns are case insensitive and can contain '*' or '?' meta characters.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: speedup_file_patterns
|
||||
- Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS
|
||||
- Type: string
|
||||
|
@ -231,6 +239,8 @@ This option allows you to disable speedup (put by hash) for large files.
|
|||
|
||||
The reason is that preliminary hashing can exhaust your RAM or disk space.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: speedup_max_disk
|
||||
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
|
||||
- Type: SizeSuffix
|
||||
|
@ -247,6 +257,8 @@ Reason is that preliminary hashing can exhaust your RAM or disk space.
|
|||
|
||||
Files larger than the size given below will always be hashed on disk.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: speedup_max_memory
|
||||
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
|
||||
- Type: SizeSuffix
|
||||
|
@ -263,6 +275,8 @@ Files larger than the size given below will always be hashed on disk.
|
|||
|
||||
What should copy do if file checksum is mismatched or invalid.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: check_hash
|
||||
- Env Var: RCLONE_MAILRU_CHECK_HASH
|
||||
- Type: bool
|
||||
|
@ -279,10 +293,12 @@ HTTP user agent used internally by client.
|
|||
|
||||
Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user_agent
|
||||
- Env Var: RCLONE_MAILRU_USER_AGENT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --mailru-quirks
|
||||
|
||||
|
@ -294,17 +310,21 @@ flags is not documented and not guaranteed to persist between releases.
|
|||
Quirks will be removed when the backend grows stable.
|
||||
Supported quirks: atomicmkdir binlist unknowndirs
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: quirks
|
||||
- Env Var: RCLONE_MAILRU_QUIRKS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --mailru-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_MAILRU_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -160,10 +160,12 @@ Here are the standard options specific to mega (Mega).
|
|||
|
||||
User name.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_MEGA_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --mega-pass
|
||||
|
||||
|
@ -171,10 +173,12 @@ Password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_MEGA_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -187,6 +191,8 @@ Output more debug from Mega.
|
|||
If this flag is set (along with -vv) it will print further debugging
|
||||
information from the mega backend.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: debug
|
||||
- Env Var: RCLONE_MEGA_DEBUG
|
||||
- Type: bool
|
||||
|
@ -200,6 +206,8 @@ Normally the mega backend will put all deletions into the trash rather
|
|||
than permanently deleting them. If you specify this then rclone will
|
||||
permanently delete objects instead.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hard_delete
|
||||
- Env Var: RCLONE_MEGA_HARD_DELETE
|
||||
- Type: bool
|
||||
|
@ -207,10 +215,12 @@ permanently delete objects instead.
|
|||
|
||||
#### --mega-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_MEGA_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
276
docs/content/netstorage.md
Normal file
|
@ -0,0 +1,276 @@
|
|||
---
|
||||
title: "Akamai Netstorage"
|
||||
description: "Rclone docs for Akamai NetStorage"
|
||||
---
|
||||
|
||||
{{< icon "fas fa-database" >}} Akamai NetStorage
|
||||
-------------------------------------------------
|
||||
|
||||
Paths are specified as `remote:`
|
||||
You may put subdirectories in too, e.g. `remote:/path/to/dir`.
|
||||
If you have a CP code you can use that as the folder after the domain such as \<domain>\/\<cpcode>\/\<internal directories within cpcode>.
|
||||
|
||||
For example, this is commonly configured with or without a CP code:
|
||||
* **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
|
||||
* **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
|
||||
|
||||
|
||||
See all buckets
|
||||
rclone lsd remote:
|
||||
The initial setup for NetStorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
|
||||
|
||||
Here's an example of how to make a remote called `ns1`.
|
||||
|
||||
1. To begin the interactive configuration process, enter this command:
|
||||
|
||||
```
|
||||
rclone config
|
||||
```
|
||||
|
||||
2. Type `n` to create a new remote.
|
||||
|
||||
```
|
||||
n) New remote
|
||||
d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> n
|
||||
```
|
||||
|
||||
3. For this example, enter `ns1` when you reach the name> prompt.
|
||||
|
||||
```
|
||||
name> ns1
|
||||
```
|
||||
|
||||
4. Enter `netstorage` as the type of storage to configure.
|
||||
|
||||
```
|
||||
Type of storage to configure.
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
Choose a number from below, or type in your own value
|
||||
XX / NetStorage
|
||||
\ "netstorage"
|
||||
Storage> netstorage
|
||||
```
|
||||
|
||||
5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
|
||||
|
||||
|
||||
```
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
Choose a number from below, or type in your own value
|
||||
1 / HTTP protocol
|
||||
\ "http"
|
||||
2 / HTTPS protocol
|
||||
\ "https"
|
||||
protocol> 1
|
||||
```
|
||||
|
||||
6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `<domain>/<cpcode>/<content>/`
|
||||
|
||||
```
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
host> baseball-nsu.akamaihd.net/123456/content/
|
||||
```
|
||||
|
||||
7. Set the NetStorage account name.
|
||||
```
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
account> username
|
||||
```
|
||||
|
||||
8. Set the NetStorage account secret/G2O key, which will be used for authentication. Select the `y` option to set your own password, then enter your secret.
|
||||
Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption.
|
||||
|
||||
```
|
||||
y) Yes type in my own password
|
||||
g) Generate random password
|
||||
y/g> y
|
||||
Enter the password:
|
||||
password:
|
||||
Confirm the password:
|
||||
password:
|
||||
```
|
||||
|
||||
9. View the summary and confirm your remote configuration.
|
||||
|
||||
```
|
||||
[ns1]
|
||||
type = netstorage
|
||||
protocol = http
|
||||
host = baseball-nsu.akamaihd.net/123456/content/
|
||||
account = username
|
||||
secret = *** ENCRYPTED ***
|
||||
--------------------
|
||||
y) Yes this is OK (default)
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
This remote is called `ns1` and can now be used.
|
||||
|
||||
### Example operations
|
||||
Get started with rclone and NetStorage using these examples. For additional rclone commands, visit https://rclone.org/commands/.
|
||||
|
||||
##### See contents of a directory in your project
|
||||
|
||||
rclone lsd ns1:/974012/testing/
|
||||
|
||||
##### Sync the local contents with the remote
|
||||
|
||||
rclone sync . ns1:/974012/testing/
|
||||
|
||||
##### Upload local content to remote
|
||||
rclone copy notes.txt ns1:/974012/testing/
|
||||
|
||||
##### Delete content on remote
|
||||
rclone delete ns1:/974012/testing/notes.txt
|
||||
|
||||
##### Move or copy content between CP codes.
|
||||
Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
|
||||
|
||||
rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
|
||||
|
||||
|
||||
### Symlink Support
|
||||
|
||||
The NetStorage backend changes the rclone `--links, -l` behavior. When uploading, instead of creating the .rclonelink file, rclone uses the "symlink" API to create the corresponding symlink on the remote. The .rclonelink file will not be created; the upload is intercepted and only a symlink file that matches the source file name, with no suffix, is created on the remote.
|
||||
|
||||
This effectively allows commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server; refer to the "symlink" section below.
|
||||
|
||||
Individual symlink files on the remote can be used with commands like "cat" to print the destination name, "delete" to delete the symlink, or copy/copyto and move/moveto to download them from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.
|
||||
|
||||
**Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote.
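A hedged example of the behaviour described above, reusing the `ns1` remote and CP code paths from earlier on this page (`mylink.rclonelink` is a placeholder name):

```
rclone copy --links /local/dir ns1:/974012/testing/      # local symlinks become symlinks on NetStorage
rclone cat ns1:/974012/testing/mylink.rclonelink         # print the destination of an individual symlink
```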
|
||||
|
||||
### Implicit vs. Explicit Directories
|
||||
|
||||
With NetStorage, directories can exist in one of two forms:
|
||||
|
||||
1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group.
|
||||
2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, non-existent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
|
||||
|
||||
Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the upload path. This helps with interoperability with other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone does not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
|
||||
|
||||
### ListR Feature
|
||||
|
||||
NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.
|
||||
|
||||
* **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects.
|
||||
|
||||
* **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the `--fast-list` option.
|
||||
|
||||
There are pros and cons of using the ListR method; refer to the [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag, but with extra memory usage as a side effect. It might also result in higher CPU utilization, but the whole task can be completed faster.
|
||||
|
||||
**Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output.
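A hedged illustration of the flags discussed above, using the example CP code path from earlier on this page:

```
rclone lsf -R ns1:/974012/                       # uses ListR by default
rclone lsf -R --disable listR ns1:/974012/       # force the non-recursive listing method
rclone sync --fast-list . ns1:/974012/testing/   # opt in to ListR for sync
```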
|
||||
|
||||
### Purge Feature
|
||||
|
||||
NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use the quick-delete action for the purge command and, if this functionality is disabled, will fall back to a standard delete method.
|
||||
|
||||
**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using "quick-delete". In general, using the quick-delete method will not delete the tree immediately, and objects targeted for quick-delete may still be accessible.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/netstorage/netstorage.go then run make backenddocs" >}}
### Standard options

Here are the standard options specific to netstorage (Akamai NetStorage).

#### --netstorage-host

Domain+path of NetStorage host to connect to.

Format should be <domain>/<internal folders>

Properties:

- Config: host
- Env Var: RCLONE_NETSTORAGE_HOST
- Type: string
- Required: true

#### --netstorage-account

Set the NetStorage account name

Properties:

- Config: account
- Env Var: RCLONE_NETSTORAGE_ACCOUNT
- Type: string
- Required: true

#### --netstorage-secret

Set the NetStorage account secret/G2O key for authentication.

Please choose the 'y' option to set your own password then enter your secret.

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

Properties:

- Config: secret
- Env Var: RCLONE_NETSTORAGE_SECRET
- Type: string
- Required: true

### Advanced options

Here are the advanced options specific to netstorage (Akamai NetStorage).

#### --netstorage-protocol

Select between HTTP or HTTPS protocol.

Most users should choose HTTPS, which is the default.
HTTP is provided primarily for debugging purposes.

Properties:

- Config: protocol
- Env Var: RCLONE_NETSTORAGE_PROTOCOL
- Type: string
- Default: "https"
- Examples:
    - "http"
        - HTTP protocol
    - "https"
        - HTTPS protocol
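
For reference, a minimal sketch of the resulting config section (the host, CP code and account values below are placeholders; the secret is stored obscured when entered via `rclone config`):

    [ns]
    type = netstorage
    protocol = https
    host = usage-example.akamaihd.net/380000/content/
    account = myaccount
    secret = ...obscured secret...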
## Backend commands

Here are the commands specific to the netstorage backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](/rc/#backend-command).

### du

Return disk usage information for a specified directory

    rclone backend du remote: [options] [<arguments>+]

The usage information returned includes the targeted directory as well as all
files stored in any sub-directories that may exist.
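
A sketch of a du call (the remote name and path are placeholders):

    rclone backend du ns:/380000/path/to/directory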
### symlink

You can create a symbolic link in ObjectStore with the symlink action.

    rclone backend symlink remote: [options] [<arguments>+]

The desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.

    rclone backend symlink <src> <path>
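
A sketch based on the usage line above (the remote name and paths are placeholders; the source object comes first, followed by the desired symlink path):

    rclone backend symlink ns:/380000/source/file.txt /380000/links/mylink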
{{< rem autogenerated options stop >}}
|
|
@ -204,10 +204,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-client-secret
|
||||
|
||||
|
@ -215,15 +217,19 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-region
|
||||
|
||||
Choose national cloud region for OneDrive.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_ONEDRIVE_REGION
|
||||
- Type: string
|
||||
|
@ -246,10 +252,12 @@ Here are the advanced options specific to onedrive (Microsoft OneDrive).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_ONEDRIVE_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-auth-url
|
||||
|
||||
|
@ -257,10 +265,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_ONEDRIVE_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-token-url
|
||||
|
||||
|
@ -268,10 +278,12 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_ONEDRIVE_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-chunk-size
|
||||
|
||||
|
@ -281,6 +293,8 @@ Above this size files will be chunked - must be multiple of 320k (327,680 bytes)
|
|||
should not exceed 250M (262,144,000 bytes) else you may encounter \"Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big.\"
|
||||
Note that the chunks will be buffered into memory.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -290,30 +304,69 @@ Note that the chunks will be buffered into memory.
|
|||
|
||||
The ID of the drive to use.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: drive_id
|
||||
- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-drive-type
|
||||
|
||||
The type of the drive (personal | business | documentLibrary).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: drive_type
|
||||
- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-root-folder-id
|
||||
|
||||
ID of the root folder.
|
||||
|
||||
This isn't normally needed, but in special circumstances you might
|
||||
know the folder ID that you wish to access but not be able to get
|
||||
there through a path traversal.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-disable-site-permission
|
||||
|
||||
Disable the request for Sites.Read.All permission.
|
||||
|
||||
If set to true, you will no longer be able to search for a SharePoint site when
|
||||
configuring drive ID, because rclone will not request Sites.Read.All permission.
|
||||
Set it to true if your organization didn't assign Sites.Read.All permission to the
|
||||
application, and your organization disallows users from consenting to app
permission requests on their own.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_site_permission
|
||||
- Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
#### --onedrive-expose-onenote-files
|
||||
|
||||
Set to make OneNote files show up in directory listings.
|
||||
|
||||
By default rclone will hide OneNote files in directory listings because
|
||||
By default, rclone will hide OneNote files in directory listings because
|
||||
operations like "Open" and "Update" won't work on them. But this
|
||||
behaviour may also prevent you from deleting them. If you want to
|
||||
delete OneNote files or otherwise want them to show up in directory
|
||||
listing, set this option.
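
A sketch of listing and deleting a OneNote notebook with the flag set (the remote and notebook names are placeholders):

    rclone lsf remote:Documents --onedrive-expose-onenote-files
    rclone deletefile "remote:Documents/My Notebook.one" --onedrive-expose-onenote-files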
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: expose_onenote_files
|
||||
- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
|
||||
- Type: bool
|
||||
|
@ -327,6 +380,8 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
|
|||
the files to copy are already shared between them. In other cases, rclone will
|
||||
fall back to normal copy (which will be slightly slower).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: server_side_across_configs
|
||||
- Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
|
||||
- Type: bool
|
||||
|
@ -336,6 +391,8 @@ fall back to normal copy (which will be slightly slower).
|
|||
|
||||
Size of listing chunk.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_chunk
|
||||
- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
|
||||
- Type: int
|
||||
|
@ -357,6 +414,8 @@ modification time and removes all but the last version.
|
|||
this flag there.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_versions
|
||||
- Env Var: RCLONE_ONEDRIVE_NO_VERSIONS
|
||||
- Type: bool
|
||||
|
@ -366,6 +425,8 @@ this flag there.
|
|||
|
||||
Set the scope of the links created by the link command.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: link_scope
|
||||
- Env Var: RCLONE_ONEDRIVE_LINK_SCOPE
|
||||
- Type: string
|
||||
|
@ -383,6 +444,8 @@ Set the scope of the links created by the link command.
|
|||
|
||||
Set the type of the links created by the link command.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: link_type
|
||||
- Env Var: RCLONE_ONEDRIVE_LINK_TYPE
|
||||
- Type: string
|
||||
|
@ -402,17 +465,21 @@ Set the password for links created by the link command.
|
|||
At the time of writing this only works with OneDrive personal paid accounts.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: link_password
|
||||
- Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --onedrive-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_ONEDRIVE_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -108,10 +108,12 @@ Here are the standard options specific to opendrive (OpenDrive).
|
|||
|
||||
Username.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: username
|
||||
- Env Var: RCLONE_OPENDRIVE_USERNAME
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --opendrive-password
|
||||
|
||||
|
@ -119,10 +121,12 @@ Password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: password
|
||||
- Env Var: RCLONE_OPENDRIVE_PASSWORD
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -130,10 +134,12 @@ Here are the advanced options specific to opendrive (OpenDrive).
|
|||
|
||||
#### --opendrive-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_OPENDRIVE_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -146,6 +152,8 @@ Files will be uploaded in chunks this size.
|
|||
Note that these chunks are buffered in memory so increasing them will
|
||||
increase memory use.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
|
|
@ -145,10 +145,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_PCLOUD_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --pcloud-client-secret
|
||||
|
||||
|
@ -156,10 +158,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -169,10 +173,12 @@ Here are the advanced options specific to pcloud (Pcloud).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_PCLOUD_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --pcloud-auth-url
|
||||
|
||||
|
@ -180,10 +186,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_PCLOUD_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --pcloud-token-url
|
||||
|
||||
|
@ -191,17 +199,21 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_PCLOUD_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --pcloud-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_PCLOUD_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -211,6 +223,8 @@ See the [encoding section in the overview](/overview/#encoding) for more info.
|
|||
|
||||
Fill in for rclone to use a non root folder as its starting point.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
|
@ -225,6 +239,8 @@ however you will need to set it by hand if you are using remote config
|
|||
with rclone authorize.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hostname
|
||||
- Env Var: RCLONE_PCLOUD_HOSTNAME
|
||||
- Type: string
|
||||
|
|
|
@ -113,10 +113,12 @@ API Key.
|
|||
This is not normally used - use oauth instead.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: api_key
|
||||
- Env Var: RCLONE_PREMIUMIZEME_API_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -124,10 +126,12 @@ Here are the advanced options specific to premiumizeme (premiumize.me).
|
|||
|
||||
#### --premiumizeme-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -115,10 +115,12 @@ Here are the advanced options specific to putio (Put.io).
|
|||
|
||||
#### --putio-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_PUTIO_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -152,6 +152,8 @@ Get QingStor credentials from runtime.
|
|||
|
||||
Only applies if access_key_id and secret_access_key is blank.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: env_auth
|
||||
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
|
||||
- Type: bool
|
||||
|
@ -168,10 +170,12 @@ QingStor Access Key ID.
|
|||
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_key_id
|
||||
- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --qingstor-secret-access-key
|
||||
|
||||
|
@ -179,10 +183,12 @@ QingStor Secret Access Key (password).
|
|||
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: secret_access_key
|
||||
- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --qingstor-endpoint
|
||||
|
||||
|
@ -190,10 +196,12 @@ Enter an endpoint URL to connection QingStor API.
|
|||
|
||||
Leave blank to use the default value "https://qingstor.com:443".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_QINGSTOR_ENDPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --qingstor-zone
|
||||
|
||||
|
@ -201,10 +209,12 @@ Zone to connect to.
|
|||
|
||||
Default is "pek3a".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: zone
|
||||
- Env Var: RCLONE_QINGSTOR_ZONE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "pek3a"
|
||||
- The Beijing (China) Three Zone.
|
||||
|
@ -224,6 +234,8 @@ Here are the advanced options specific to qingstor (QingCloud Object Storage).
|
|||
|
||||
Number of connection retries.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: connection_retries
|
||||
- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
|
||||
- Type: int
|
||||
|
@ -236,6 +248,8 @@ Cutoff for switching to chunked upload.
|
|||
Any files larger than this will be uploaded in chunks of chunk_size.
|
||||
The minimum is 0 and the maximum is 5 GiB.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -254,6 +268,8 @@ in memory per transfer.
|
|||
If you are transferring large files over high-speed links and you have
|
||||
enough memory, then increasing this will speed up the transfers.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -273,6 +289,8 @@ If you are uploading small numbers of large files over high-speed links
|
|||
and these uploads do not fully utilize your bandwidth, then increasing
|
||||
this may help to speed up the transfers.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_concurrency
|
||||
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
|
||||
- Type: int
|
||||
|
@ -280,10 +298,12 @@ this may help to speed up the transfers.
|
|||
|
||||
#### --qingstor-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_QINGSTOR_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -663,7 +663,7 @@ If the rate parameter is not supplied then the bandwidth is queried
|
|||
The format of the parameter is exactly the same as passed to --bwlimit
|
||||
except only one bandwidth may be specified.
|
||||
|
||||
In either case "rate" is returned as a human readable string, and
|
||||
In either case "rate" is returned as a human-readable string, and
|
||||
"bytesPerSecond" is returned as a number.
|
||||
|
||||
### core/command: Run a rclone terminal command over rc. {#core-command}
|
||||
|
@ -1497,6 +1497,32 @@ check that parameter passing is working properly.
|
|||
|
||||
**Authentication is required for this call.**
### sync/bisync: Perform bidirectional synchronization between two paths. {#sync-bisync}

This takes the following parameters:

- path1 - a remote directory string e.g. `drive:path1`
- path2 - a remote directory string e.g. `drive:path2`
- dryRun - dry-run mode
- resync - performs the resync run
- checkAccess - abort if RCLONE_TEST files are not found on both filesystems
- checkFilename - file name for checkAccess (default: RCLONE_TEST)
- maxDelete - abort sync if percentage of deleted files is above
  this threshold (default: 50)
- force - bypass maxDelete safety check and run the sync
- checkSync - `true` by default, `false` disables comparison of final listings,
  `only` will skip sync, only compare listings from the last run
- removeEmptyDirs - remove empty directories at the final cleanup step
- filtersFile - read filtering patterns from a file
- workdir - server directory for history files (default: /home/ncw/.cache/rclone/bisync)
- noCleanup - retain working files

See [bisync command help](https://rclone.org/commands/rclone_bisync/)
and [full bisync description](https://rclone.org/bisync/)
for more information.

**Authentication is required for this call.**
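
A minimal sketch of invoking this via the rc, using the parameters listed above (the remote names are placeholders):

    rclone rc sync/bisync path1=drive:path1 path2=drive:path2 dryRun=true checkAccess=true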
### sync/copy: copy a directory from source remote to destination remote {#sync-copy}
|
||||
|
||||
This takes the following parameters:
|
||||
|
@ -1610,6 +1636,44 @@ starting with dir will refresh that directory, e.g.
|
|||
If the parameter recursive=true is given the whole directory tree
|
||||
will get refreshed. This refresh will use --fast-list if enabled.
|
||||
|
||||
This command takes an "fs" parameter. If this parameter is not
supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
must be supplied.
### vfs/stats: Stats for a VFS. {#vfs-stats}

This returns stats for the selected VFS.

    {
        // Status of the disk cache - only present if --vfs-cache-mode > off
        "diskCache": {
            "bytesUsed": 0,
            "erroredFiles": 0,
            "files": 0,
            "hashType": 1,
            "outOfSpace": false,
            "path": "/home/user/.cache/rclone/vfs/local/mnt/a",
            "pathMeta": "/home/user/.cache/rclone/vfsMeta/local/mnt/a",
            "uploadsInProgress": 0,
            "uploadsQueued": 0
        },
        "fs": "/mnt/a",
        "inUse": 1,
        // Status of the in memory metadata cache
        "metadataCache": {
            "dirs": 1,
            "files": 0
        },
        // Options as returned by options/get
        "opt": {
            "CacheMaxAge": 3600000000000,
            // ...
            "WriteWait": 1000000000
        }
    }
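
For example, the stats shown above could be fetched with (a sketch; the "fs" parameter, described below, selects the VFS when more than one is in use):

    rclone rc vfs/stats fs=/mnt/a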
This command takes an "fs" parameter. If this parameter is not
|
||||
supplied and if there is only one VFS in use then that VFS will be
|
||||
used. If there is more than one VFS in use then the "fs" parameter
|
||||
|
|
|
@ -566,16 +566,18 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
|
|||
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
|
||||
### Standard options
|
||||
|
||||
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
|
||||
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
|
||||
|
||||
#### --s3-provider
|
||||
|
||||
Choose your S3 provider.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: provider
|
||||
- Env Var: RCLONE_S3_PROVIDER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "AWS"
|
||||
- Amazon Web Services (AWS) S3
|
||||
|
@ -589,16 +591,22 @@ Choose your S3 provider.
|
|||
- Dreamhost DreamObjects
|
||||
- "IBMCOS"
|
||||
- IBM COS S3
|
||||
- "LyveCloud"
|
||||
- Seagate Lyve Cloud
|
||||
- "Minio"
|
||||
- Minio Object Storage
|
||||
- "Netease"
|
||||
- Netease Object Storage (NOS)
|
||||
- "RackCorp"
|
||||
- RackCorp Object Storage
|
||||
- "Scaleway"
|
||||
- Scaleway Object Storage
|
||||
- "SeaweedFS"
|
||||
- SeaweedFS S3
|
||||
- "StackPath"
|
||||
- StackPath Object Storage
|
||||
- "Storj"
|
||||
- Storj (S3 Compatible Gateway)
|
||||
- "TencentCOS"
|
||||
- Tencent Cloud Object Storage (COS)
|
||||
- "Wasabi"
|
||||
|
@ -612,6 +620,8 @@ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if
|
|||
|
||||
Only applies if access_key_id and secret_access_key is blank.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: env_auth
|
||||
- Env Var: RCLONE_S3_ENV_AUTH
|
||||
- Type: bool
|
||||
|
@ -628,10 +638,12 @@ AWS Access Key ID.
|
|||
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_key_id
|
||||
- Env Var: RCLONE_S3_ACCESS_KEY_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-secret-access-key
|
||||
|
||||
|
@ -639,19 +651,24 @@ AWS Secret Access Key (password).
|
|||
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: secret_access_key
|
||||
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-region
|
||||
|
||||
Region to connect to.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_S3_REGION
|
||||
- Provider: AWS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "us-east-1"
|
||||
- The default endpoint - a good choice if you are unsure.
|
||||
|
@ -732,12 +749,67 @@ Region to connect to.
|
|||
|
||||
#### --s3-region
|
||||
|
||||
Region to connect to.
|
||||
region - the location where your bucket will be created and your data stored.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_S3_REGION
|
||||
- Provider: RackCorp
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "global"
|
||||
- Global CDN (All locations) Region
|
||||
- "au"
|
||||
- Australia (All states)
|
||||
- "au-nsw"
|
||||
- NSW (Australia) Region
|
||||
- "au-qld"
|
||||
- QLD (Australia) Region
|
||||
- "au-vic"
|
||||
- VIC (Australia) Region
|
||||
- "au-wa"
|
||||
- Perth (Australia) Region
|
||||
- "ph"
|
||||
- Manila (Philippines) Region
|
||||
- "th"
|
||||
- Bangkok (Thailand) Region
|
||||
- "hk"
|
||||
- HK (Hong Kong) Region
|
||||
- "mn"
|
||||
- Ulaanbaatar (Mongolia) Region
|
||||
- "kg"
|
||||
- Bishkek (Kyrgyzstan) Region
|
||||
- "id"
|
||||
- Jakarta (Indonesia) Region
|
||||
- "jp"
|
||||
- Tokyo (Japan) Region
|
||||
- "sg"
|
||||
- SG (Singapore) Region
|
||||
- "de"
|
||||
- Frankfurt (Germany) Region
|
||||
- "us"
|
||||
- USA (AnyCast) Region
|
||||
- "us-east-1"
|
||||
- New York (USA) Region
|
||||
- "us-west-1"
|
||||
- Freemont (USA) Region
|
||||
- "nz"
|
||||
- Auckland (New Zealand) Region
|
||||
|
||||
#### --s3-region
|
||||
|
||||
Region to connect to.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_S3_REGION
|
||||
- Provider: Scaleway
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "nl-ams"
|
||||
- Amsterdam, The Netherlands
|
||||
|
@ -750,10 +822,13 @@ Region to connect to.
|
|||
|
||||
Leave blank if you are using an S3 clone and you don't have a region.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_S3_REGION
|
||||
- Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Use this if unsure.
|
||||
|
@ -768,10 +843,13 @@ Endpoint for S3 API.
|
|||
|
||||
Leave blank if using AWS to use the default endpoint for the region.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: AWS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-endpoint
|
||||
|
||||
|
@ -779,10 +857,13 @@ Endpoint for IBM COS S3 API.
|
|||
|
||||
Specify if using an IBM COS On Premise.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: IBMCOS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "s3.us.cloud-object-storage.appdomain.cloud"
|
||||
- US Cross Region Endpoint
|
||||
|
@ -913,10 +994,13 @@ Specify if using an IBM COS On Premise.
|
|||
|
||||
Endpoint for OSS API.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: Alibaba
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "oss-accelerate.aliyuncs.com"
|
||||
- Global Accelerate
|
||||
|
@ -973,10 +1057,13 @@ Endpoint for OSS API.
|
|||
|
||||
Endpoint for Scaleway Object Storage.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: Scaleway
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "s3.nl-ams.scw.cloud"
|
||||
- Amsterdam Endpoint
|
||||
|
@ -987,10 +1074,13 @@ Endpoint for Scaleway Object Storage.
|
|||
|
||||
Endpoint for StackPath Object Storage.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: StackPath
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "s3.us-east-2.stackpathstorage.com"
|
||||
- US East Endpoint
|
||||
|
@ -1001,12 +1091,34 @@ Endpoint for StackPath Object Storage.
|
|||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for Tencent COS API.
|
||||
Endpoint of the Shared Gateway.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: Storj
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "gateway.eu1.storjshare.io"
|
||||
- EU1 Shared Gateway
|
||||
- "gateway.us1.storjshare.io"
|
||||
- US1 Shared Gateway
|
||||
- "gateway.ap1.storjshare.io"
|
||||
- Asia-Pacific Shared Gateway
|
||||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for Tencent COS API.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: TencentCOS
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "cos.ap-beijing.myqcloud.com"
|
||||
- Beijing Region
|
||||
|
@ -1049,14 +1161,68 @@ Endpoint for Tencent COS API.
|
|||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for RackCorp Object Storage.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: RackCorp
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "s3.rackcorp.com"
|
||||
- Global (AnyCast) Endpoint
|
||||
- "au.s3.rackcorp.com"
|
||||
- Australia (Anycast) Endpoint
|
||||
- "au-nsw.s3.rackcorp.com"
|
||||
- Sydney (Australia) Endpoint
|
||||
- "au-qld.s3.rackcorp.com"
|
||||
- Brisbane (Australia) Endpoint
|
||||
- "au-vic.s3.rackcorp.com"
|
||||
- Melbourne (Australia) Endpoint
|
||||
- "au-wa.s3.rackcorp.com"
|
||||
- Perth (Australia) Endpoint
|
||||
- "ph.s3.rackcorp.com"
|
||||
- Manila (Philippines) Endpoint
|
||||
- "th.s3.rackcorp.com"
|
||||
- Bangkok (Thailand) Endpoint
|
||||
- "hk.s3.rackcorp.com"
|
||||
- HK (Hong Kong) Endpoint
|
||||
- "mn.s3.rackcorp.com"
|
||||
- Ulaanbaatar (Mongolia) Endpoint
|
||||
- "kg.s3.rackcorp.com"
|
||||
- Bishkek (Kyrgyzstan) Endpoint
|
||||
- "id.s3.rackcorp.com"
|
||||
- Jakarta (Indonesia) Endpoint
|
||||
- "jp.s3.rackcorp.com"
|
||||
- Tokyo (Japan) Endpoint
|
||||
- "sg.s3.rackcorp.com"
|
||||
- SG (Singapore) Endpoint
|
||||
- "de.s3.rackcorp.com"
|
||||
- Frankfurt (Germany) Endpoint
|
||||
- "us.s3.rackcorp.com"
|
||||
- USA (AnyCast) Endpoint
|
||||
- "us-east-1.s3.rackcorp.com"
|
||||
- New York (USA) Endpoint
|
||||
- "us-west-1.s3.rackcorp.com"
|
||||
- Freemont (USA) Endpoint
|
||||
- "nz.s3.rackcorp.com"
|
||||
- Auckland (New Zealand) Endpoint
|
||||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for S3 API.
|
||||
|
||||
Required when using an S3 clone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Provider: !AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "objects-us-east-1.dream.io"
|
||||
- Dream Objects endpoint
|
||||
|
@ -1068,6 +1234,12 @@ Required when using an S3 clone.
|
|||
- Digital Ocean Spaces Singapore 1
|
||||
- "localhost:8333"
|
||||
- SeaweedFS S3 localhost
|
||||
- "s3.us-east-1.lyvecloud.seagate.com"
|
||||
- Seagate Lyve Cloud US East 1 (Virginia)
|
||||
- "s3.us-west-1.lyvecloud.seagate.com"
|
||||
- Seagate Lyve Cloud US West 1 (California)
|
||||
- "s3.ap-southeast-1.lyvecloud.seagate.com"
|
||||
- Seagate Lyve Cloud AP Southeast 1 (Singapore)
|
||||
- "s3.wasabisys.com"
|
||||
- Wasabi US East endpoint
|
||||
- "s3.us-west-1.wasabisys.com"
|
||||
|
@ -1085,10 +1257,13 @@ Location constraint - must be set to match the Region.
|
|||
|
||||
Used when creating buckets only.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: location_constraint
|
||||
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
|
||||
- Provider: AWS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Empty for US Region, Northern Virginia, or Pacific Northwest
|
||||
|
@ -1147,10 +1322,13 @@ Location constraint - must match endpoint when using IBM Cloud Public.
|
|||
|
||||
For on-prem COS, do not make a selection from this list, hit enter.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: location_constraint
|
||||
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
|
||||
- Provider: IBMCOS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "us-standard"
|
||||
- US Cross Region Standard
|
||||
|
@ -1219,14 +1397,69 @@ For on-prem COS, do not make a selection from this list, hit enter.
|
|||
|
||||
#### --s3-location-constraint
|
||||
|
||||
Location constraint - the location where your bucket will be located and your data stored.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: location_constraint
|
||||
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
|
||||
- Provider: RackCorp
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "global"
|
||||
- Global CDN Region
|
||||
- "au"
|
||||
- Australia (All locations)
|
||||
- "au-nsw"
|
||||
- NSW (Australia) Region
|
||||
- "au-qld"
|
||||
- QLD (Australia) Region
|
||||
- "au-vic"
|
||||
- VIC (Australia) Region
|
||||
- "au-wa"
|
||||
- Perth (Australia) Region
|
||||
- "ph"
|
||||
- Manila (Philippines) Region
|
||||
- "th"
|
||||
- Bangkok (Thailand) Region
|
||||
- "hk"
|
||||
- HK (Hong Kong) Region
|
||||
- "mn"
|
||||
- Ulaanbaatar (Mongolia) Region
|
||||
- "kg"
|
||||
- Bishkek (Kyrgyzstan) Region
|
||||
- "id"
|
||||
- Jakarta (Indonesia) Region
|
||||
- "jp"
|
||||
- Tokyo (Japan) Region
|
||||
- "sg"
|
||||
- SG (Singapore) Region
|
||||
- "de"
|
||||
- Frankfurt (Germany) Region
|
||||
- "us"
|
||||
- USA (AnyCast) Region
|
||||
- "us-east-1"
|
||||
- New York (USA) Region
|
||||
- "us-west-1"
|
||||
- Freemont (USA) Region
|
||||
- "nz"
|
||||
- Auckland (New Zealand) Region
|
||||
|
||||
#### --s3-location-constraint
|
||||
|
||||
Location constraint - must be set to match the Region.
|
||||
|
||||
Leave blank if not sure. Used when creating buckets only.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: location_constraint
|
||||
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
|
||||
- Provider: !AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-acl
|
||||
|
||||
|
@ -1239,10 +1472,13 @@ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview
|
|||
Note that this ACL is applied when server-side copying objects as S3
|
||||
doesn't copy the ACL from the source but rather writes a fresh one.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: acl
|
||||
- Env Var: RCLONE_S3_ACL
|
||||
- Provider: !Storj
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "default"
|
||||
- Owner gets Full_CONTROL.
|
||||
|
@ -1289,10 +1525,13 @@ doesn't copy the ACL from the source but rather writes a fresh one.
|
|||
|
||||
The server-side encryption algorithm used when storing this object in S3.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: server_side_encryption
|
||||
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
|
||||
- Provider: AWS,Ceph,Minio
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- None
|
||||
|
@ -1305,10 +1544,13 @@ The server-side encryption algorithm used when storing this object in S3.
|
|||
|
||||
If using KMS ID you must provide the ARN of Key.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: sse_kms_key_id
|
||||
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
|
||||
- Provider: AWS,Ceph,Minio
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- None
|
||||
|
@ -1319,10 +1561,13 @@ If using KMS ID you must provide the ARN of Key.
|
|||
|
||||
The storage class to use when storing new objects in S3.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||
- Provider: AWS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
|
@ -1347,10 +1592,13 @@ The storage class to use when storing new objects in S3.
|
|||
|
||||
The storage class to use when storing new objects in OSS.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||
- Provider: Alibaba
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
|
@ -1365,10 +1613,13 @@ The storage class to use when storing new objects in OSS.
|
|||
|
||||
The storage class to use when storing new objects in Tencent COS.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||
- Provider: TencentCOS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
|
@ -1383,10 +1634,13 @@ The storage class to use when storing new objects in Tencent COS.
|
|||
|
||||
The storage class to use when storing new objects in S3.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||
- Provider: Scaleway
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default.
|
||||
|
@ -1399,7 +1653,7 @@ The storage class to use when storing new objects in S3.
|
|||
|
||||
### Advanced options
|
||||
|
||||
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
|
||||
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
|
||||
|
||||
#### --s3-bucket-acl
|
||||
|
||||
|
@ -1410,10 +1664,12 @@ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview
|
|||
Note that this ACL is applied only when creating buckets. If it
isn't set then "acl" is used instead.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: bucket_acl
|
||||
- Env Var: RCLONE_S3_BUCKET_ACL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "private"
|
||||
- Owner gets FULL_CONTROL.
|
||||
|
@ -1433,8 +1689,11 @@ isn't set then "acl" is used instead.
|
|||
|
||||
Enables requester pays option when interacting with S3 bucket.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: requester_pays
|
||||
- Env Var: RCLONE_S3_REQUESTER_PAYS
|
||||
- Provider: AWS
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
|
@ -1442,10 +1701,13 @@ Enables requester pays option when interacting with S3 bucket.
|
|||
|
||||
If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: sse_customer_algorithm
|
||||
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
|
||||
- Provider: AWS,Ceph,Minio
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- None
|
||||
|
@ -1456,10 +1718,13 @@ If using SSE-C, the server-side encryption algorithm used when storing this obje
|
|||
|
||||
If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: sse_customer_key
|
||||
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
|
||||
- Provider: AWS,Ceph,Minio
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- None
|
||||
|
@ -1471,10 +1736,13 @@ If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
|
|||
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: sse_customer_key_md5
|
||||
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
|
||||
- Provider: AWS,Ceph,Minio
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- None
|
||||
|
@ -1486,6 +1754,8 @@ Cutoff for switching to chunked upload.
|
|||
Any files larger than this will be uploaded in chunks of chunk_size.
|
||||
The minimum is 0 and the maximum is 5 GiB.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -1515,6 +1785,8 @@ most 10,000 chunks, this means that by default the maximum size of
|
|||
a file you can stream upload is 48 GiB. If you wish to stream upload
|
||||
larger files then you will need to increase chunk_size.
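
To put numbers on this: the maximum streamed size is roughly 10,000 times the chunk size, so the default 5 MiB gives about 48 GiB, and streaming a ~1 TiB object needs a chunk size of at least about 105 MiB. A sketch (the bucket and file names are placeholders):

    rclone rcat remote:bucket/big.img --s3-chunk-size 110Mi < big.img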
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_S3_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -1534,6 +1806,8 @@ Rclone will automatically increase the chunk size when uploading a
|
|||
large file of a known size to stay below this number of chunks limit.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: max_upload_parts
|
||||
- Env Var: RCLONE_S3_MAX_UPLOAD_PARTS
|
||||
- Type: int
|
||||
|
@ -1548,6 +1822,8 @@ copied in chunks of this size.
|
|||
|
||||
The minimum is 0 and the maximum is 5 GiB.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: copy_cutoff
|
||||
- Env Var: RCLONE_S3_COPY_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -1562,6 +1838,8 @@ uploading it so it can add it to metadata on the object. This is great
|
|||
for data integrity checking but can cause long delays for large files
|
||||
to start uploading.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_checksum
|
||||
- Env Var: RCLONE_S3_DISABLE_CHECKSUM
|
||||
- Type: bool
|
||||
|
@ -1581,10 +1859,12 @@ it will default to the current user's home directory.
|
|||
Windows: "%USERPROFILE%\.aws\credentials"
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: shared_credentials_file
|
||||
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-profile
|
||||
|
||||
|
@ -1597,19 +1877,23 @@ If empty it will default to the environment variable "AWS_PROFILE" or
|
|||
"default" if that environment variable is also not set.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: profile
|
||||
- Env Var: RCLONE_S3_PROFILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-session-token
|
||||
|
||||
An AWS session token.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: session_token
|
||||
- Env Var: RCLONE_S3_SESSION_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-upload-concurrency
|
||||
|
||||
|
@ -1622,6 +1906,8 @@ If you are uploading small numbers of large files over high-speed links
|
|||
and these uploads do not fully utilize your bandwidth, then increasing
|
||||
this may help to speed up the transfers.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_concurrency
|
||||
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
|
||||
- Type: int
|
||||
|
@ -1640,6 +1926,8 @@ Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this
|
|||
false - rclone will do this automatically based on the provider
|
||||
setting.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: force_path_style
|
||||
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
|
||||
- Type: bool
|
||||
|
@ -1654,6 +1942,8 @@ If it is set then rclone will use v2 authentication.
|
|||
|
||||
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: v2_auth
|
||||
- Env Var: RCLONE_S3_V2_AUTH
|
||||
- Type: bool
|
||||
|
@ -1665,8 +1955,11 @@ If true use the AWS S3 accelerated endpoint.
|
|||
|
||||
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_accelerate_endpoint
|
||||
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
|
||||
- Provider: AWS
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
|
@ -1679,8 +1972,11 @@ It should be set to true for resuming uploads across different sessions.
|
|||
WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: leave_parts_on_error
|
||||
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
|
||||
- Provider: AWS
|
||||
- Type: bool
|
||||
- Default: false
|
||||
|
||||
|
@ -1694,11 +1990,53 @@ In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://d
|
|||
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_chunk
|
||||
- Env Var: RCLONE_S3_LIST_CHUNK
|
||||
- Type: int
|
||||
- Default: 1000
|
||||
|
||||
#### --s3-list-version
|
||||
|
||||
Version of ListObjects to use: 1,2 or 0 for auto.
|
||||
|
||||
When S3 originally launched it only provided the ListObjects call to
|
||||
enumerate objects in a bucket.
|
||||
|
||||
However in May 2016 the ListObjectsV2 call was introduced. This is
|
||||
much higher performance and should be used if at all possible.
|
||||
|
||||
If set to the default, 0, rclone will guess according to the provider
|
||||
set which list objects method to call. If it guesses wrong, then it
|
||||
may be set manually here.
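
For example, if your provider only implements the original ListObjects call, you might force version 1 like this (a sketch; the bucket name is a placeholder):

    rclone lsd remote:bucket --s3-list-version 1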
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_version
|
||||
- Env Var: RCLONE_S3_LIST_VERSION
|
||||
- Type: int
|
||||
- Default: 0
|
||||
|
||||
#### --s3-list-url-encode
|
||||
|
||||
Whether to url encode listings: true/false/unset
|
||||
|
||||
Some providers support URL encoding listings and, where this is
available, it is more reliable when using control characters in file
|
||||
names. If this is set to unset (the default) then rclone will choose
|
||||
according to the provider setting what to apply, but you can override
|
||||
rclone's choice here.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: list_url_encode
|
||||
- Env Var: RCLONE_S3_LIST_URL_ENCODE
|
||||
- Type: Tristate
|
||||
- Default: unset
|
||||
|
||||
#### --s3-no-check-bucket
|
||||
|
||||
If set, don't attempt to check the bucket exists or create it.
|
||||
|
@ -1711,6 +2049,8 @@ creation permissions. Before v1.52.0 this would have passed silently
|
|||
due to a bug.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_check_bucket
|
||||
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
|
||||
- Type: bool
|
||||
|
@ -1748,6 +2088,8 @@ operation. In practice the chance of an undetected upload failure is
|
|||
very small even with this flag.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_head
|
||||
- Env Var: RCLONE_S3_NO_HEAD
|
||||
- Type: bool
|
||||
|
@ -1757,6 +2099,8 @@ very small even with this flag.
|
|||
|
||||
If set, do not do HEAD before GET when getting objects.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: no_head_object
|
||||
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
|
||||
- Type: bool
|
||||
|
@ -1764,10 +2108,12 @@ If set, do not do HEAD before GET when getting objects.
|
|||
|
||||
#### --s3-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_S3_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
@ -1780,6 +2126,8 @@ How often internal memory buffer pools will be flushed.
|
|||
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
|
||||
This option controls how often unused buffers will be removed from the pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_flush_time
|
||||
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
|
||||
- Type: Duration
|
||||
|
@ -1789,6 +2137,8 @@ This option controls how often unused buffers will be removed from the pool.
|
|||
|
||||
Whether to use mmap buffers in internal memory pool.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: memory_pool_use_mmap
|
||||
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
|
||||
- Type: bool
|
||||
|
@ -1806,6 +2156,8 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
|
|||
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_http2
|
||||
- Env Var: RCLONE_S3_DISABLE_HTTP2
|
||||
- Type: bool
|
||||
|
@ -1817,10 +2169,26 @@ Custom endpoint for downloads.
|
|||
This is usually set to a CloudFront CDN URL as AWS S3 offers
|
||||
cheaper egress for data downloaded through the CloudFront network.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: download_url
|
||||
- Env Var: RCLONE_S3_DOWNLOAD_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --s3-use-multipart-etag
|
||||
|
||||
Whether to use ETag in multipart uploads for verification
|
||||
|
||||
This should be true, false or left unset to use the default for the provider.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_multipart_etag
|
||||
- Env Var: RCLONE_S3_USE_MULTIPART_ETAG
|
||||
- Type: Tristate
|
||||
- Default: unset
|
||||
|
||||
## Backend commands
|
||||
|
||||
|
@ -1836,7 +2204,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
|
|||
info on how to pass options and arguments.
|
||||
|
||||
These can be run on a running backend using the rc command
|
||||
[backend/command](/rc/#backend/command).
|
||||
[backend/command](/rc/#backend-command).
|
||||
|
||||
### restore
|
||||
|
||||
|
|
|
@ -272,10 +272,12 @@ Here are the standard options specific to seafile (seafile).
|
|||
|
||||
URL of seafile host to connect to.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: url
|
||||
- Env Var: RCLONE_SEAFILE_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
- Examples:
|
||||
- "https://cloud.seafile.com/"
|
||||
- Connect to cloud.seafile.com.
|
||||
|
@ -284,10 +286,12 @@ URL of seafile host to connect to.
|
|||
|
||||
User name (usually email address).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_SEAFILE_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --seafile-pass
|
||||
|
||||
|
@ -295,15 +299,19 @@ Password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_SEAFILE_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --seafile-2fa
|
||||
|
||||
Two-factor authentication ('true' if the account has 2FA enabled).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: 2fa
|
||||
- Env Var: RCLONE_SEAFILE_2FA
|
||||
- Type: bool
|
||||
|
@ -315,10 +323,12 @@ Name of the library.
|
|||
|
||||
Leave blank to access all non-encrypted libraries.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: library
|
||||
- Env Var: RCLONE_SEAFILE_LIBRARY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --seafile-library-key
|
||||
|
||||
|
@ -328,19 +338,23 @@ Leave blank if you pass it through the command line.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: library_key
|
||||
- Env Var: RCLONE_SEAFILE_LIBRARY_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --seafile-auth-token
|
||||
|
||||
Authentication token.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_token
|
||||
- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -350,6 +364,8 @@ Here are the advanced options specific to seafile (seafile).
|
|||
|
||||
Should rclone create a library if it doesn't exist.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: create_library
|
||||
- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
|
||||
- Type: bool
|
||||
|
@ -357,10 +373,12 @@ Should rclone create a library if it doesn't exist.
|
|||
|
||||
#### --seafile-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_SEAFILE_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -267,28 +267,34 @@ SSH host to connect to.
|
|||
|
||||
E.g. "example.com".
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: host
|
||||
- Env Var: RCLONE_SFTP_HOST
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --sftp-user
|
||||
|
||||
SSH username, leave blank for current username, $USER.
|
||||
SSH username.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_SFTP_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Default: "$USER"
|
||||
|
||||
#### --sftp-port
|
||||
|
||||
SSH port, leave blank to use default (22).
|
||||
SSH port number.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: port
|
||||
- Env Var: RCLONE_SFTP_PORT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Type: int
|
||||
- Default: 22
|
||||
|
||||
#### --sftp-pass
|
||||
|
||||
|
@ -296,10 +302,12 @@ SSH password, leave blank to use ssh-agent.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_SFTP_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sftp-key-pem
|
||||
|
||||
|
@ -307,10 +315,12 @@ Raw PEM-encoded private key.
|
|||
|
||||
If specified, will override key_file parameter.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key_pem
|
||||
- Env Var: RCLONE_SFTP_KEY_PEM
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sftp-key-file
|
||||
|
||||
|
@ -320,10 +330,12 @@ Leave blank or set key-use-agent to use ssh-agent.
|
|||
|
||||
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key_file
|
||||
- Env Var: RCLONE_SFTP_KEY_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sftp-key-file-pass
|
||||
|
||||
|
@ -334,10 +346,12 @@ in the new OpenSSH format can't be used.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key_file_pass
|
||||
- Env Var: RCLONE_SFTP_KEY_FILE_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sftp-pubkey-file
|
||||
|
||||
|
@ -347,10 +361,12 @@ Set this if you have a signed certificate you want to use for authentication.
|
|||
|
||||
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pubkey_file
|
||||
- Env Var: RCLONE_SFTP_PUBKEY_FILE
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sftp-key-use-agent
|
||||
|
||||
|
@ -360,6 +376,8 @@ When key-file is also set, the ".pub" file of the specified key-file is read and
|
|||
requested from the ssh-agent. This avoids `Too many authentication failures for *username*` errors
|
||||
when the ssh-agent contains many keys.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key_use_agent
|
||||
- Env Var: RCLONE_SFTP_KEY_USE_AGENT
|
||||
- Type: bool
|
||||
|
@ -380,6 +398,8 @@ This enables the use of the following insecure ciphers and key exchange methods:
|
|||
|
||||
Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_insecure_cipher
|
||||
- Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
|
||||
- Type: bool
|
||||
|
@ -396,6 +416,8 @@ Disable the execution of SSH commands to determine if remote file hashing is ava

Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

Properties:

- Config: disable_hashcheck
- Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
- Type: bool

@ -413,10 +435,12 @@ Set this value to enable server host key validation.

Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.

Properties:

- Config: known_hosts_file
- Env Var: RCLONE_SFTP_KNOWN_HOSTS_FILE
- Type: string
- Default: ""
- Required: false
- Examples:
    - "~/.ssh/known_hosts"
        - Use OpenSSH's known_hosts file.

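A sketch of turning host key validation on against the example file shown above; the remote name is a placeholder for an existing sftp remote:

    rclone lsd mysftp: --sftp-known-hosts-file ~/.ssh/known_hosts
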
@ -430,6 +454,8 @@ If this is set and no password is supplied then rclone will:
- not contact the ssh agent

Properties:

- Config: ask_password
- Env Var: RCLONE_SFTP_ASK_PASSWORD
- Type: bool

@ -444,21 +470,25 @@ different. This issue affects among others Synology NAS boxes.

Shared folders can be found in directories representing volumes

    rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
    rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory

Home directory can be found in a shared folder called "home"

    rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
    rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory

Properties:

- Config: path_override
- Env Var: RCLONE_SFTP_PATH_OVERRIDE
- Type: string
- Default: ""
- Required: false

#### --sftp-set-modtime
|
||||
|
||||
Set the modified time on the remote if set.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: set_modtime
|
||||
- Env Var: RCLONE_SFTP_SET_MODTIME
|
||||
- Type: bool
|
||||
|
@ -470,10 +500,12 @@ The command used to read md5 hashes.

Leave blank for autodetect.

Properties:

- Config: md5sum_command
- Env Var: RCLONE_SFTP_MD5SUM_COMMAND
- Type: string
- Default: ""
- Required: false

#### --sftp-sha1sum-command

@ -481,15 +513,19 @@ The command used to read sha1 hashes.

Leave blank for autodetect.

Properties:

- Config: sha1sum_command
- Env Var: RCLONE_SFTP_SHA1SUM_COMMAND
- Type: string
- Default: ""
- Required: false

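If autodetection does not pick the right tools, they can be forced explicitly. A sketch assuming the remote ships coreutils-style `md5sum` and `sha1sum` binaries; the remote name and path are placeholders:

    rclone md5sum mysftp:path \
        --sftp-md5sum-command md5sum \
        --sftp-sha1sum-command sha1sum
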
#### --sftp-skip-links
|
||||
|
||||
Set to skip any symlinks and any other non regular files.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: skip_links
|
||||
- Env Var: RCLONE_SFTP_SKIP_LINKS
|
||||
- Type: bool
|
||||
|
@ -499,6 +535,8 @@ Set to skip any symlinks and any other non regular files.

Specifies the SSH2 subsystem on the remote host.

Properties:

- Config: subsystem
- Env Var: RCLONE_SFTP_SUBSYSTEM
- Type: string

@ -510,10 +548,12 @@ Specifies the path or command to run a sftp server on the remote host.

The subsystem option is ignored when server_command is defined.

Properties:

- Config: server_command
- Env Var: RCLONE_SFTP_SERVER_COMMAND
- Type: string
- Default: ""
- Required: false

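A sketch of pointing rclone at an explicit server binary instead of the default subsystem; the path is an assumed example for a typical OpenSSH installation and the remote name is a placeholder:

    rclone lsd mysftp: --sftp-server-command /usr/lib/openssh/sftp-server
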
#### --sftp-use-fstat
|
||||
|
||||
|
@ -528,6 +568,8 @@ It has been found that this helps with IBM Sterling SFTP servers which have
|
|||
any given time.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: use_fstat
|
||||
- Env Var: RCLONE_SFTP_USE_FSTAT
|
||||
- Type: bool
|
||||
|
@ -551,6 +593,8 @@ Then you may need to enable this flag.
|
|||
If concurrent reads are disabled, the use_fstat option is ignored.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_concurrent_reads
|
||||
- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_READS
|
||||
- Type: bool
|
||||
|
@ -566,6 +610,8 @@ the performance greatly, especially for distant servers.
|
|||
This option disables concurrent writes should that be necessary.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: disable_concurrent_writes
|
||||
- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_WRITES
|
||||
- Type: bool
|
||||
|
@ -581,6 +627,8 @@ given, rclone will empty the connection pool.
|
|||
Set to 0 to keep connections indefinitely.
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: idle_timeout
|
||||
- Env Var: RCLONE_SFTP_IDLE_TIMEOUT
|
||||
- Type: Duration
|
||||
|
|
|
@ -159,10 +159,12 @@ ID of the root folder.
|
|||
Leave blank to access "Personal Folders". You can use one of the
|
||||
standard values here or any folder ID (long hex number ID).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_folder_id
|
||||
- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Access the Personal Folders (default).
|
||||
|
@ -183,6 +185,8 @@ Here are the advanced options specific to sharefile (Citrix Sharefile).
|
|||
|
||||
Cutoff for switching to multipart upload.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: upload_cutoff
|
||||
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
|
||||
- Type: SizeSuffix
|
||||
|
@ -199,6 +203,8 @@ is buffered in memory one per transfer.
|
|||
|
||||
Reducing this will reduce memory usage but decrease performance.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: chunk_size
|
||||
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
|
||||
- Type: SizeSuffix
|
||||
|
@ -212,17 +218,21 @@ This is usually auto discovered as part of the oauth process, but can
|
|||
be set manually to something like: https://XXX.sharefile.com
|
||||
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_SHAREFILE_ENDPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sharefile-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_SHAREFILE_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -141,6 +141,8 @@ Sia daemon API URL, like http://sia.daemon.host:9980.

Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.

Properties:

- Config: api_url
- Env Var: RCLONE_SIA_API_URL
- Type: string

@ -154,10 +156,12 @@ Can be found in the apipassword file located in HOME/.sia/ or in the daemon dire

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

Properties:

- Config: api_password
- Env Var: RCLONE_SIA_API_PASSWORD
- Type: string
- Default: ""
- Required: false

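A sketch connecting to a locally running siad with these two options; the URL is the usual local default and the password value is a placeholder for the contents of the apipassword file, and the remote name is assumed:

    rclone lsd mysia: \
        --sia-api-url http://127.0.0.1:9980 \
        --sia-api-password "$(rclone obscure 'contents-of-apipassword')"
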
### Advanced options
|
||||
|
||||
|
@ -169,6 +173,8 @@ Siad User Agent
|
|||
|
||||
Sia daemon requires the 'Sia-Agent' user agent by default for security
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user_agent
|
||||
- Env Var: RCLONE_SIA_USER_AGENT
|
||||
- Type: string
|
||||
|
@ -176,10 +182,12 @@ Sia daemon requires the 'Sia-Agent' user agent by default for security
|
|||
|
||||
#### --sia-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_SIA_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -221,6 +221,8 @@ Here are the standard options specific to storj (Storj Decentralized Cloud Stora
|
|||
|
||||
Choose an authentication method.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: provider
|
||||
- Env Var: RCLONE_STORJ_PROVIDER
|
||||
- Type: string
|
||||
|
@ -235,10 +237,13 @@ Choose an authentication method.
|
|||
|
||||
Access grant.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_grant
|
||||
- Env Var: RCLONE_STORJ_ACCESS_GRANT
|
||||
- Provider: existing
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --storj-satellite-address
|
||||
|
||||
|
@ -246,8 +251,11 @@ Satellite address.
|
|||
|
||||
Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: satellite_address
|
||||
- Env Var: RCLONE_STORJ_SATELLITE_ADDRESS
|
||||
- Provider: new
|
||||
- Type: string
|
||||
- Default: "us-central-1.storj.io"
|
||||
- Examples:
|
||||
|
@ -262,10 +270,13 @@ Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
|
|||
|
||||
API key.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: api_key
|
||||
- Env Var: RCLONE_STORJ_API_KEY
|
||||
- Provider: new
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --storj-passphrase

@ -273,10 +284,13 @@ Encryption passphrase.

To access existing objects enter passphrase used for uploading.

Properties:

- Config: passphrase
- Env Var: RCLONE_STORJ_PASSPHRASE
- Provider: new
- Type: string
- Default: ""
- Required: false

{{< rem autogenerated options stop >}}

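To tie the two authentication routes together: with the "existing" provider a single access grant stands in for the satellite address, API key and passphrase required by the "new" route. A sketch with a placeholder grant value:

    # Assumed example: an access grant created beforehand in the Storj console.
    rclone lsd :storj: --storj-provider existing --storj-access-grant "my-access-grant"
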
@ -131,10 +131,12 @@ Sugarsync App ID.
|
|||
|
||||
Leave blank to use rclone's.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: app_id
|
||||
- Env Var: RCLONE_SUGARSYNC_APP_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-access-key-id
|
||||
|
||||
|
@ -142,10 +144,12 @@ Sugarsync Access Key ID.
|
|||
|
||||
Leave blank to use rclone's.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: access_key_id
|
||||
- Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-private-access-key
|
||||
|
||||
|
@ -153,16 +157,20 @@ Sugarsync Private Access Key.
|
|||
|
||||
Leave blank to use rclone's.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: private_access_key
|
||||
- Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-hard-delete

Permanently delete files if true,
otherwise put them in the deleted files folder.

Properties:

- Config: hard_delete
- Env Var: RCLONE_SUGARSYNC_HARD_DELETE
- Type: bool

@ -178,10 +186,12 @@ Sugarsync refresh token.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: refresh_token
|
||||
- Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-authorization
|
||||
|
||||
|
@ -189,10 +199,12 @@ Sugarsync authorization.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: authorization
|
||||
- Env Var: RCLONE_SUGARSYNC_AUTHORIZATION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-authorization-expiry
|
||||
|
||||
|
@ -200,10 +212,12 @@ Sugarsync authorization expiry.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: authorization_expiry
|
||||
- Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-user
|
||||
|
||||
|
@ -211,10 +225,12 @@ Sugarsync user.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_SUGARSYNC_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-root-id
|
||||
|
||||
|
@ -222,10 +238,12 @@ Sugarsync root id.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: root_id
|
||||
- Env Var: RCLONE_SUGARSYNC_ROOT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-deleted-id
|
||||
|
||||
|
@ -233,17 +251,21 @@ Sugarsync deleted folder id.
|
|||
|
||||
Leave blank normally, will be auto configured by rclone.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: deleted_id
|
||||
- Env Var: RCLONE_SUGARSYNC_DELETED_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --sugarsync-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_SUGARSYNC_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -251,6 +251,8 @@ Here are the standard options specific to swift (OpenStack Swift (Rackspace Clou

Get swift credentials from environment variables in standard OpenStack form.

Properties:

- Config: env_auth
- Env Var: RCLONE_SWIFT_ENV_AUTH
- Type: bool

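As a reminder of what "standard OpenStack form" means in practice, a minimal sketch with placeholder credentials; the variable names are the usual OS_* ones referenced throughout this section and the remote name is assumed:

    # Assumed example credentials.
    export OS_AUTH_URL=https://auth.example.com/v3
    export OS_USERNAME=demo
    export OS_PASSWORD=example-password
    export OS_TENANT_NAME=demo-project
    rclone lsd myswift: --swift-env-auth
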
@ -266,28 +268,34 @@ Get swift credentials from environment variables in standard OpenStack form.
|
|||
|
||||
User name to log in (OS_USERNAME).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_SWIFT_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-key
|
||||
|
||||
API key or password (OS_PASSWORD).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: key
|
||||
- Env Var: RCLONE_SWIFT_KEY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-auth
|
||||
|
||||
Authentication URL for server (OS_AUTH_URL).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth
|
||||
- Env Var: RCLONE_SWIFT_AUTH
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "https://auth.api.rackspacecloud.com/v1.0"
|
||||
- Rackspace US
|
||||
|
@ -306,105 +314,129 @@ Authentication URL for server (OS_AUTH_URL).
|
|||
|
||||
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user_id
|
||||
- Env Var: RCLONE_SWIFT_USER_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-domain
|
||||
|
||||
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: domain
|
||||
- Env Var: RCLONE_SWIFT_DOMAIN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-tenant
|
||||
|
||||
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tenant
|
||||
- Env Var: RCLONE_SWIFT_TENANT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-tenant-id
|
||||
|
||||
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tenant_id
|
||||
- Env Var: RCLONE_SWIFT_TENANT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-tenant-domain
|
||||
|
||||
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: tenant_domain
|
||||
- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-region
|
||||
|
||||
Region name - optional (OS_REGION_NAME).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_SWIFT_REGION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-storage-url
|
||||
|
||||
Storage URL - optional (OS_STORAGE_URL).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_url
|
||||
- Env Var: RCLONE_SWIFT_STORAGE_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-auth-token
|
||||
|
||||
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_token
|
||||
- Env Var: RCLONE_SWIFT_AUTH_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-application-credential-id
|
||||
|
||||
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: application_credential_id
|
||||
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-application-credential-name
|
||||
|
||||
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: application_credential_name
|
||||
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-application-credential-secret
|
||||
|
||||
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: application_credential_secret
|
||||
- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --swift-auth-version
|
||||
|
||||
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_version
|
||||
- Env Var: RCLONE_SWIFT_AUTH_VERSION
|
||||
- Type: int
|
||||
|
@ -414,6 +446,8 @@ AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH
|
|||
|
||||
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: endpoint_type
|
||||
- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
|
||||
- Type: string
|
||||
|
@ -435,10 +469,12 @@ container. The policy cannot be changed afterwards. The allowed
|
|||
configuration values and their meaning depend on your Swift storage
|
||||
provider.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: storage_policy
|
||||
- Env Var: RCLONE_SWIFT_STORAGE_POLICY
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
|
@ -457,6 +493,8 @@ If true avoid calling abort upload on a failure.
|
|||
|
||||
It should be set to true for resuming uploads across different sessions.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: leave_parts_on_error
|
||||
- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
|
||||
- Type: bool
|
||||
|
@ -469,6 +507,8 @@ Above this size files will be chunked into a _segments container.

Above this size files will be chunked into a _segments container. The
default for this is 5 GiB which is its maximum value.

Properties:

- Config: chunk_size
- Env Var: RCLONE_SWIFT_CHUNK_SIZE
- Type: SizeSuffix

@ -487,6 +527,8 @@ files are easier to deal with and have an MD5SUM.

Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.

Properties:

- Config: no_chunk
- Env Var: RCLONE_SWIFT_NO_CHUNK
- Type: bool

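A sketch of tuning these two options for a one-off copy; the remote and container names are placeholders:

    # Assumed example: 1 GiB segments, and no chunking on streaming uploads.
    rclone copy /local/big-files myswift:container \
        --swift-chunk-size 1G \
        --swift-no-chunk
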
@ -494,10 +536,12 @@ copy operations.
|
|||
|
||||
#### --swift-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_SWIFT_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -182,15 +182,19 @@ List of space separated upstreams.

Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

Properties:

- Config: upstreams
- Env Var: RCLONE_UNION_UPSTREAMS
- Type: string
- Default: ""
- Required: true

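A sketch of an on-the-fly union built from two assumed upstream remotes, one of them tagged read-only with the `:ro` suffix shown above:

    rclone lsd :union: --union-upstreams "archive:backups:ro live:"
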
#### --union-action-policy
|
||||
|
||||
Policy to choose upstream on ACTION category.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: action_policy
|
||||
- Env Var: RCLONE_UNION_ACTION_POLICY
|
||||
- Type: string
|
||||
|
@ -200,6 +204,8 @@ Policy to choose upstream on ACTION category.
|
|||
|
||||
Policy to choose upstream on CREATE category.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: create_policy
|
||||
- Env Var: RCLONE_UNION_CREATE_POLICY
|
||||
- Type: string
|
||||
|
@ -209,6 +215,8 @@ Policy to choose upstream on CREATE category.
|
|||
|
||||
Policy to choose upstream on SEARCH category.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: search_policy
|
||||
- Env Var: RCLONE_UNION_SEARCH_POLICY
|
||||
- Type: string
|
||||
|
@ -220,6 +228,8 @@ Cache time of usage and free space (in seconds).
|
|||
|
||||
This option is only useful when a path preserving policy is used.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: cache_time
|
||||
- Env Var: RCLONE_UNION_CACHE_TIME
|
||||
- Type: int
|
||||
|
|
|
@ -109,10 +109,12 @@ Your access token.

Get it from https://uptobox.com/my_account.

Properties:

- Config: access_token
- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Default: ""
- Required: false

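The Env Var line above works like every other RCLONE_* variable: set it and the corresponding flag can be omitted. A sketch with a placeholder token and remote name:

    # Assumed example token value.
    export RCLONE_UPTOBOX_ACCESS_TOKEN=xxxxxxxxxxxxxxxx
    rclone lsd myuptobox:
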
### Advanced options
|
||||
|
||||
|
@ -120,10 +122,12 @@ Here are the advanced options specific to uptobox (Uptobox).
|
|||
|
||||
#### --uptobox-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_UPTOBOX_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -118,19 +118,23 @@ URL of http host to connect to.
|
|||
|
||||
E.g. https://example.com.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: url
|
||||
- Env Var: RCLONE_WEBDAV_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: true
|
||||
|
||||
#### --webdav-vendor
|
||||
|
||||
Name of the Webdav site/service/software you are using.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: vendor
|
||||
- Env Var: RCLONE_WEBDAV_VENDOR
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "nextcloud"
|
||||
- Nextcloud
|
||||
|
@ -149,10 +153,12 @@ User name.
|
|||
|
||||
In case NTLM authentication is used, the username should be in the format 'Domain\User'.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: user
|
||||
- Env Var: RCLONE_WEBDAV_USER
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --webdav-pass
|
||||
|
||||
|
@ -160,19 +166,23 @@ Password.
|
|||
|
||||
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: pass
|
||||
- Env Var: RCLONE_WEBDAV_PASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --webdav-bearer-token
|
||||
|
||||
Bearer token instead of user/pass (e.g. a Macaroon).
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: bearer_token
|
||||
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -182,23 +192,27 @@ Here are the advanced options specific to webdav (Webdav).
|
|||
|
||||
Command to run to get a bearer token.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: bearer_token_command
|
||||
- Env Var: RCLONE_WEBDAV_BEARER_TOKEN_COMMAND
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --webdav-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_WEBDAV_ENCODING
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --webdav-headers

@ -209,11 +223,13 @@ Use this to set additional HTTP headers for all transactions

The input format is comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.

For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.

You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.

Properties:

- Config: headers
- Env Var: RCLONE_WEBDAV_HEADERS
- Type: CommaSepList

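A sketch of the command-line form of that list, with placeholder header values and an assumed remote name:

    rclone lsd mywebdav: --webdav-headers "Cookie,name=value,Authorization,xxx"
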
@ -124,10 +124,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_YANDEX_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --yandex-client-secret
|
||||
|
||||
|
@ -135,10 +137,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_YANDEX_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
### Advanced options
|
||||
|
||||
|
@ -148,10 +152,12 @@ Here are the advanced options specific to yandex (Yandex Disk).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_YANDEX_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --yandex-auth-url
|
||||
|
||||
|
@ -159,10 +165,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_YANDEX_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --yandex-token-url
|
||||
|
||||
|
@ -170,15 +178,19 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_YANDEX_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --yandex-hard-delete
|
||||
|
||||
Delete files permanently rather than putting them into the trash.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: hard_delete
|
||||
- Env Var: RCLONE_YANDEX_HARD_DELETE
|
||||
- Type: bool
|
||||
|
@ -186,10 +198,12 @@ Delete files permanently rather than putting them into the trash.
|
|||
|
||||
#### --yandex-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_YANDEX_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
|
@ -135,10 +135,12 @@ OAuth Client Id.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_ZOHO_CLIENT_ID
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --zoho-client-secret
|
||||
|
||||
|
@ -146,10 +148,12 @@ OAuth Client Secret.
|
|||
|
||||
Leave blank normally.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_ZOHO_CLIENT_SECRET
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --zoho-region
|
||||
|
||||
|
@ -159,10 +163,12 @@ You'll have to use the region your organization is registered in. If
|
|||
not sure use the same top level domain as you connect to in your
|
||||
browser.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: region
|
||||
- Env Var: RCLONE_ZOHO_REGION
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
- Examples:
|
||||
- "com"
|
||||
- United states / Global
|
||||
|
@ -181,10 +187,12 @@ Here are the advanced options specific to zoho (Zoho).
|
|||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_ZOHO_TOKEN
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --zoho-auth-url
|
||||
|
||||
|
@ -192,10 +200,12 @@ Auth server URL.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_ZOHO_AUTH_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --zoho-token-url
|
||||
|
||||
|
@ -203,17 +213,21 @@ Token server url.
|
|||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_ZOHO_TOKEN_URL
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Required: false
|
||||
|
||||
#### --zoho-encoding
|
||||
|
||||
This sets the encoding for the backend.
|
||||
The encoding for the backend.
|
||||
|
||||
See the [encoding section in the overview](/overview/#encoding) for more info.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: encoding
|
||||
- Env Var: RCLONE_ZOHO_ENCODING
|
||||
- Type: MultiEncoder
|
||||
|
|
2	go.mod

@ -8,7 +8,7 @@ require (
	github.com/Azure/azure-storage-blob-go v0.14.0
	github.com/Azure/go-autorest/autorest/adal v0.9.17
	github.com/Azure/go-ntlmssp v0.0.0-20200615164410-66371956d46c
	github.com/Max-Sum/base32768 v0.0.0-20191205131208-7937843c71d5 // indirect
	github.com/Max-Sum/base32768 v0.0.0-20191205131208-7937843c71d5
	github.com/Unknwon/goconfig v0.0.0-20200908083735-df7de6a44db8
	github.com/a8m/tree v0.0.0-20210414114729-ce3525c5c2ef
	github.com/aalpar/deheap v0.0.0-20210914013432-0cc84d79dec3

12	go.sum

@ -510,8 +510,6 @@ github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
|
|||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
|
||||
github.com/pkg/sftp v1.13.4 h1:Lb0RYJCmgUcBgZosfoi9Y9sbl6+LJgOIgk/2Y4YjMFg=
|
||||
github.com/pkg/sftp v1.13.4/go.mod h1:LzqnAvaD5TWeNBsZpfKxSYn1MbjWwOsCIAFFJbpIsK8=
|
||||
github.com/pkg/sftp v1.13.5-0.20211228200725-31aac3e1878d h1:7cHNeARnMq3icpbMdvyUELykWM4zOj5NRhH2Y3sfgBc=
|
||||
github.com/pkg/sftp v1.13.5-0.20211228200725-31aac3e1878d/go.mod h1:wHDZ0IZX6JcBYRK1TH9bcVq8G7TLpVHYIGJRFnmPfxg=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
|
@ -543,8 +541,6 @@ github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0
|
|||
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
|
||||
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8 h1:Y258uzXU/potCYnQd1r6wlAnoMB68BiCkCcCnKx1SH8=
|
||||
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8/go.mod h1:bSJjRokAHHOhA+XFxplld8w2R/dXLH7Z3BZ532vhFwU=
|
||||
github.com/rclone/ftp v1.0.0-210902f h1:cm4OxC1S8JARRdEw+CYAyLxg4H+l84STHAdfmK7op2Q=
|
||||
github.com/rclone/ftp v1.0.0-210902f/go.mod h1:2lmrmq866uF2tnje75wQHzmPXhmSWUt7Gyx2vgK1RCU=
|
||||
github.com/rclone/ftp v1.0.0-210902h h1:e9rbDiTdorXRsRtUOdbr6asesJkYZQ9efy1ts5OEBb8=
|
||||
github.com/rclone/ftp v1.0.0-210902h/go.mod h1:GtHgnfXJAx17bmdVU8kiItiUNFkMbFt+sIg0SwAfyx0=
|
||||
github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
|
||||
|
@ -708,12 +704,9 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
|
|||
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20201112155050-0c6587e931a9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20210415154028-4f45737414dc/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa h1:idItI2DDfCokpg0N51B2VtiLdJ4vAuXC9fnCb2gACo4=
|
||||
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 h1:0es+/5331RGQPcXlMfP+WrnIIS6dNnNRe0WB02W0F4M=
|
||||
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
|
@ -804,8 +797,6 @@ golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qx
|
|||
golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20210614182718-04defd469f4e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211109214657-ef0fda0de508 h1:v3NKo+t/Kc3EASxaKZ82lwK6mCf4ZeObQBduYFZHo7c=
|
||||
golang.org/x/net v0.0.0-20211109214657-ef0fda0de508/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
|
@ -905,7 +896,6 @@ golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7w
|
|||
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210415045647-66c3f260301c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
|
@ -921,8 +911,6 @@ golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBc
|
|||
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211013075003-97ac67df715c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211109184856-51b60fd695b3 h1:T6tyxxvHMj2L1R2kZg0uNMpS8ZhB9lRa9XRGTCSA65w=
|
||||
golang.org/x/sys v0.0.0-20211109184856-51b60fd695b3/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e h1:fLOSk5Q00efkSvAm+4xcoXD+RRmLmmulPn5I3Y9F2EM=
|
||||
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
|
|