---
title: "Microsoft Azure Blob Storage"
description: "Rclone docs for Microsoft Azure Blob Storage"
date: "2017-07-30"
---

<i class="fa fa-windows"></i> Microsoft Azure Blob Storage
-----------------------------------------

Paths are specified as `remote:container` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg
`remote:container/path/to/dir`.

Here is an example of making a Microsoft Azure Blob Storage
configuration, for a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Box
   \ "box"
 5 / Dropbox
   \ "dropbox"
 6 / Encrypt/Decrypt a remote
   \ "crypt"
 7 / FTP Connection
   \ "ftp"
 8 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 9 / Google Drive
   \ "drive"
10 / Hubic
   \ "hubic"
11 / Local Disk
   \ "local"
12 / Microsoft Azure Blob Storage
   \ "azureblob"
13 / Microsoft OneDrive
   \ "onedrive"
14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
15 / SSH/SFTP Connection
   \ "sftp"
16 / Yandex Disk
   \ "yandex"
17 / http Connection
   \ "http"
Storage> azureblob
Storage Account Name
account> account_name
Storage Account Key
key> base64encodedkey==
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = account_name
key = base64encodedkey==
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See all containers

    rclone lsd remote:

Make a new container

    rclone mkdir remote:container

List the contents of a container

    rclone ls remote:container

Sync `/home/local/directory` to the remote container, deleting any excess
files in the container.

    rclone sync /home/local/directory remote:container

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
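
For example, with the `remote` and `container` names used above, a sync can be told to list with fewer transactions (a sketch; it assumes the remote is already configured and reachable):

```shell
rclone sync --fast-list /home/local/directory remote:container
```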

### Modified time ###

The modified time is stored as metadata on the object with the `mtime`
key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no overhead to using it.
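
For reference, a timestamp of that shape can be produced with GNU `date` (a sketch; `%N` nanoseconds is a GNU coreutils extension):

```shell
# RFC3339 UTC time with nanosecond precision, the shape of the
# value rclone stores under the mtime metadata key,
# eg 2017-07-30T12:34:56.123456789Z
date -u +"%Y-%m-%dT%H:%M:%S.%NZ"
```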

### Hashes ###

MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, eg the local disk.

### Authenticating with Azure Blob Storage

Rclone has 3 ways of authenticating with Azure Blob Storage:

#### Account and Key

This is the most straightforward and least flexible way. Just fill in the `account` and `key` lines and leave the rest blank.

#### SAS URL

This can be an account level SAS URL or a container level SAS URL.

To use it, leave `account` and `key` blank and fill in `sas_url`.

An account level SAS URL or container level SAS URL can be obtained from the Azure portal or Azure Storage Explorer.
To get a container level SAS URL, right click on a container in the Azure Blob explorer in the Azure portal.

If you use a container level SAS URL, rclone operations are permitted only on that particular container, eg

    rclone ls azureblob:container

Since the container name is already part of the SAS URL, you can also leave it out:

    rclone ls azureblob:

However these will not work

    rclone lsd azureblob:
    rclone ls azureblob:othercontainer

This would be useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment.

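A container level SAS URL remote needs neither `account` nor `key`; the corresponding `rclone.conf` entry might look like this (a sketch; ACCOUNT, CONTAINER and the query string are placeholders, not a real SAS token):

```
[remote]
type = azureblob
sas_url = https://ACCOUNT.blob.core.windows.net/CONTAINER?sv=...&sig=...
```
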
### Multipart uploads ###

Rclone supports multipart uploads with Azure Blob storage. Files
bigger than 256MB will be uploaded using chunked upload by default.

The files will be uploaded in parallel in 4MB chunks (by default).
Note that these chunks are buffered in memory and there may be up to
`--transfers` of them being uploaded at once.

Files can't be split into more than 50,000 chunks, so by default the
largest file that can be uploaded with 4MB chunk size is 195GB.
Above this rclone will double the chunk size until it creates fewer
than 50,000 chunks. By default this will mean a maximum file size of
3.2TB can be uploaded. This can be raised to 5TB using
`--azureblob-chunk-size 100M`.

Note that rclone doesn't commit the block list until the end of the
upload, which means that there is a limit of 9.5TB of multipart uploads
in progress, as Azure won't allow more than that amount of uncommitted
blocks.

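The chunk limits above can be checked with shell arithmetic (a sketch; sizes are taken in binary units, as rclone's `SizeSuffix` values are):

```shell
# Largest single upload = chunk size x 50,000 blocks
echo $((4 * 1024 * 1024 * 50000))    # 4MB chunks:  209715200000 bytes, about 195GB
echo $((100 * 1024 * 1024 * 50000))  # 100M chunks: 5242880000000 bytes, about 5TB
```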

<!--- autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs -->
### Standard Options

Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-account

Storage Account Name (leave blank to use connection string or SAS URL)

- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""

#### --azureblob-key

Storage Account Key (leave blank to use connection string or SAS URL)

- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""

#### --azureblob-sas-url

SAS URL for container level access only
(leave blank if using account/key or connection string)

- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""

### Advanced Options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

#### --azureblob-endpoint

Endpoint for the service
Leave blank normally.

- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""

#### --azureblob-upload-cutoff

Cutoff for switching to chunked upload (<= 256MB).

- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 256M

#### --azureblob-chunk-size

Upload chunk size (<= 100MB).

Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.

- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M

#### --azureblob-list-chunk

Size of blob list.

This sets the number of blobs requested in each listing chunk. Default
is the maximum, 5000. "List blobs" requests are permitted 2 minutes
per megabyte to complete. If an operation is taking longer than 2
minutes per megabyte on average, it will time out (
[source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval)
). This can be used to limit the number of blobs items to return, to
avoid the time out.

- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
- Default: 5000

#### --azureblob-access-tier

Access tier of blob: hot, cool or archive.

Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
set at account level

If there is no "access tier" specified, rclone doesn't apply any tier.
rclone performs "Set Tier" operation on blobs while uploading, if objects
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".

- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""

<!--- autogenerated options stop -->
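
Any of the options above can also be supplied through its environment variable rather than the config file, eg (a sketch; it assumes a working `remote` as configured earlier):

```shell
export RCLONE_AZUREBLOB_ACCESS_TIER=Cool
rclone copy /home/local/directory remote:container
```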

### Limitations ###

MD5 sums are only uploaded with chunked files if the source has an MD5
sum. This will always be the case for a local to azure copy.