---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2014-04-26"
---

<i class="fa fa-amazon"></i> Amazon S3
---------------------------------------

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making an s3 configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
What type of source is it?
Choose a number from below
 1) swift
 2) s3
 3) local
 4) google cloud storage
 5) dropbox
 6) drive
type> 2
AWS Access Key ID.
access_key_id> accesskey
AWS Secret Access Key (password).
secret_access_key> secretaccesskey
Region to connect to.
Choose a number from below, or type in your own value
 * The default endpoint - a good choice if you are unsure.
 * US Region, Northern Virginia or Pacific Northwest.
 * Leave location constraint empty.
 1) us-east-1
 * US West (Oregon) Region
 * Needs location constraint us-west-2.
 2) us-west-2
[snip]
 * South America (Sao Paulo) Region
 * Needs location constraint sa-east-1.
 9) sa-east-1
 * If using an S3 clone that only understands v2 signatures - eg Ceph - set this and make sure you set the endpoint.
10) other-v2-signature
 * If using an S3 clone that understands v4 signatures set this and make sure you set the endpoint.
11) other-v4-signature
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 * Empty for US Region, Northern Virginia or Pacific Northwest.
 1)
 * US West (Oregon) Region.
 2) us-west-2
 * US West (Northern California) Region.
 3) us-west-1
 * EU (Ireland) Region.
 4) eu-west-1
[snip]
location_constraint> 1
Remote config
--------------------
[remote]
access_key_id = accesskey
secret_access_key = secretaccesskey
region = us-east-1
endpoint =
location_constraint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               s3

e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> q
```

This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket

### Modified time ###

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.

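As an illustration of that format, here is a sketch of how such a value could be parsed. The timestamp string is made up, and this is not rclone's own code; it just shows the "seconds since the epoch with a fractional part" shape of the metadata value.

```python
from datetime import datetime, timezone

# Hypothetical X-Amz-Meta-Mtime value: floating-point seconds since
# the Unix epoch, with up to nanosecond precision in the fraction.
mtime = "1424444469.686520000"

# Split out whole seconds and nanoseconds to keep full precision
# (a plain float loses some sub-microsecond accuracy at current epochs).
seconds_str, _, frac = mtime.partition(".")
seconds = int(seconds_str)
nanoseconds = int(frac.ljust(9, "0")[:9]) if frac else 0

# For display purposes, second-level precision is usually enough.
when = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(when.isoformat(), nanoseconds)
```
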
### Multipart uploads ###

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB. Note that files uploaded with multipart
upload don't have an MD5SUM.

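A rough sketch of the arithmetic involved: 5GB is the cap on a single S3 PUT, so anything bigger has to be split into parts. The part size below is purely illustrative, not rclone's actual chunk size.

```python
SINGLE_PUT_LIMIT = 5 * 1024**3   # S3's single-PUT object size cap
PART_SIZE = 100 * 1024**2        # hypothetical 100 MiB parts

def part_count(file_size: int) -> int:
    """Number of parts needed to cover file_size bytes (ceiling division)."""
    return -(-file_size // PART_SIZE)

size = 12 * 1024**3              # a 12 GiB file
assert size > SINGLE_PUT_LIMIT   # too big for a single PUT
print(part_count(size))          # → 123
```
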
### Buckets and Regions ###

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Anonymous access to public buckets ###

If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Eg

```
e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> anons3
What type of source is it?
Choose a number from below
 1) amazon cloud drive
 2) drive
 3) dropbox
 4) google cloud storage
 5) local
 6) s3
 7) swift
type> 6
AWS Access Key ID - leave blank for anonymous access.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access.
secret_access_key>
Region to connect to.
region> 1
endpoint>
location_constraint>
```

Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

### Ceph ###

Ceph is an object storage system which presents an Amazon S3 interface.

To use rclone with Ceph, you need to set the following parameters in
the config.

```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```

Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys
removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
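You can check the unescaping for yourself with any JSON decoder; `\/` is a standard JSON escape for `/`. A quick sketch in Python, using the placeholder value from the dump above:

```python
import json

# The Ceph dump escapes "/" as "\/" inside the JSON string. Decoding
# the blob yields the real secret key with a plain "/" in it.
blob = '{"secret_key": "xxxxxx\\/xxxx"}'
secret = json.loads(blob)["secret_key"]
print(secret)  # → xxxxxx/xxxx
```
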